Let’s get real

The benefits and risks of immersive technologies for impacted communities

Cami Rincon, Jorge Perez

30 October 2025

How to read this report

If you are involved in regulation and policy relevant to immersive technologies:

  • Read the ‘Executive summary’, ‘Introduction’ and ‘Conclusion’ for an overview of how this study was conducted, our key findings and recommendations for policymakers and regulators.
  • Read the sub-section(s) under ‘Impacts of immersive technologies’ that are most relevant to your sector for an overview of the risks and benefits of immersive technologies and the findings from our deliberative workshops. For example, if you are the Health and Safety Executive, you may find the ‘Health and safety’ category most relevant.
  • Read the ‘Governance recommendations’ to see our four key recommendations for how to govern immersive technologies.

If you are a researcher interested in the impacts of immersive technologies:

  • Read the ‘Executive summary’ for an overview of the focus of the report and how we conducted our research, and a synthesis of our key findings.
  • Look over the ‘Impacts of immersive technologies’ section for a comprehensive review of the range of benefits and harms which immersive technologies could have across nine key impact categories.
  • Read the ‘Methodology’ section for details of the multi-methods approach we used to conduct our research.

Executive summary

Immersive technologies are used to enhance how we perceive and interact with digital environments. They facilitate embodied digital experiences, allowing users to interact with each other with levels of realism more like those of in-person interaction than of two-dimensional forms of media.

The term ‘immersive technologies’ encompasses interactive technologies such as virtual reality (VR), augmented reality (AR), mixed reality (MR) and immersive virtual worlds (IVWs). These differ in their position on the virtual–reality spectrum, depending on how much the user loses awareness of the physical world.

For example, AR is positioned closer to the reality end of the spectrum while VR sits at the virtual end; MR sits between these two technologies, given its ability to mix virtual content with real environments.[1] Traditional digital interfaces such as screens typically operate as maps of digital content; immersive technologies transform these maps into explorable territories that users can interact with directly.

Regulatory attention, venture capital interest, consumer focus and media coverage around immersive technologies have waned amid rising excitement around generative AI developments. Nonetheless, specific immersive technologies have attracted substantial enterprise funding and have demonstrated market expansion with enhanced technical capabilities.

However, these specialised applications present distinct hazards. Many operate within high-impact sectors (e.g. healthcare, manufacturing, the military), support safety-critical functions, and involve vulnerable populations including children and patients.

This creates considerable potential for harm and generates new regulatory, policy and ethical challenges that authorities and decision-makers must address.

In addition, immersive technologies are increasingly intertwined with AI systems, with generative AI becoming progressively integrated into immersive platforms for creating synthetic content and managing user interactions.

Regulatory bodies need to reconsider current frameworks or develop new guidance to address existing risks that may be amplified by these integrations and new risks that emerge from products that leverage both types of technology.

The wide-ranging uses of immersive technologies have an array of impacts, driven by factors including technical design, data collection and processing practices, dynamics in the immersive technology product market, and incentives for businesses to adopt these technologies. Some of these impacts are specific to immersive technologies; others are pre-existing impacts of other technologies, such as two-dimensional social media or AI systems.

These impacts can be positive, such as improved autonomy and access to online communities for groups such as people with disabilities and people in rural areas. But immersive technologies also pose several risks across industries, including new forms of harmful content, erosion of interpersonal connections in safety-critical sectors, invasive data collection practices and environmental impacts.

This report is the third publication from the Ada Lovelace Institute project ‘Return to reality: An exploration of immersive technologies’. Our earlier publication What are immersive technologies? provides an overview of how the technology works and the different types and capabilities of immersive technologies.[2] Our paper Reality check adds to this with an overview of the current trends and regulatory landscape in immersive technologies.[3]

This paper concludes the series, expanding our understanding of the potential societal impacts of immersive technologies by identifying key risks and benefits, and making recommendations for the governance of these technologies. It is based on a multi-methods research approach:

  • Literature review, drawing on existing research on immersive technologies and their social impacts and locating evidence gaps.
  • Expert and stakeholder interviews, conducted with 26 developers, investors, academics and practitioners to provide diverse perspectives on the timeline of development, product landscape, technical components and impacts of immersive technologies.
  • Impacted community case studies, developed in collaboration with Involve, a public participation charity, to focus on two current uses of immersive technologies where populations may be particularly vulnerable to impacts:
    • Immersive social worlds (ISWs): We engaged with people who are ISW users to think through the risks and opportunities of ISWs and how these may be heightened by emerging technologies.
    • Augmented reality (AR) for warehouse operations: We engaged with a group of warehouse workers who shared their insights on the use of information overlay through AR glasses, which is used to augment operations within manufacturing and logistics sectors.

We use the term ‘impacted communities’ to describe groups of individuals who currently experience impacts from immersive technologies or who are likely to in the future, either by choice or as a result of their social placement, such as through using technologies for work or to access a service.

We used deliberative techniques to engage with people from impacted communities (ISW users and warehouse workers), using impact assessment frameworks and input from subject matter experts. In each case study, participants surfaced, explored and evaluated the impacts of an immersive technology use case based on their lived experience and knowledge.

This evidence is valuable for policymakers, including regulators. It demonstrates that people from impacted communities can understand the nature of immersive technologies’ impacts in particular social domains.

The contrast between the consumer-based and the business-based case study reinforces the importance of thinking through impacts for specific use cases, which differ depending on the context of deployment, the task being augmented and the unique circumstances of the people impacted.

For instance, in the ISW case study, participants flagged the importance of safeguarding ISWs through regulatory action. Regulation needs to address the specialised nature of use cases to factor in their specific risks and safeguard their unique benefits.

Recommendations

To address the emerging risks, we make recommendations for the governance of immersive technologies. These recommendations are the culmination of our project ‘Return to reality: An exploration of immersive technologies’, which has explored key concepts and definitions of immersive technologies, key trends in the immersive technology landscape, key benefits and risks arising from these technologies, and lessons learned from the two case studies.

Recommendation 1: Increase UK government funding for regulation of emerging technologies.

Recommendation 25 of the AI Opportunities Action Plan, as accepted by the government, called on it to ‘commit to funding regulators to scale up their AI capabilities, some of which need urgent addressing’. The government’s latest spending review allocated £2 billion for implementing the plan, but did not specify an allocation for regulatory capabilities.
We recommend that a meaningful portion of this funding be allocated to empowering regulators to adapt to the impacts of AI, as well as of other emerging technologies that require safe and trusted development, including immersive technologies. Our Mission Critical briefing suggests that regulators overseeing societally critical domains are typically funded in the region of £10–100 million per annum,[4] which may give a sense of the scale needed to meaningfully address the impacts of emerging technologies.

To address the unique risks that arise from the convergence of immersive technologies and AI systems, any budget allocation should in part prioritise regulators positioned to address a significant number of impacts and/or use cases of immersive technologies. This includes the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office for Product Safety and Standards (OPSS) and the Health and Safety Executive (HSE), as well as regulators responsible for high-impact sectors with use cases that either augment safety-critical tasks or impact vulnerable groups, such as education, healthcare, military/security/policing, transport, gaming/entertainment and social media.

Recommendation 2: Regulators and policymakers should continue actively monitoring developments in immersive technologies, with a focus on bottlenecks to adoption.

We commend regulators and policymakers who proactively monitor developments in immersive technologies, and we provide a list of bottlenecks to adoption for regulators to draw on for foresight activities that can inform a preventative approach.
However, although immersive technologies have not reached broad-scale adoption, a variety of immersive technology use cases are currently being deployed in high-impact industries to augment safety-critical tasks, many of which affect vulnerable users such as children and patients – meaning that many use cases carry numerous risk factors. We recommend that regulators urgently address existing use cases through context-specific guidance.

Recommendation 3: Horizontal regulators should provide targeted guidance with specialised consideration of immersive technologies and collaborate with sector-specific peers.

Horizontal regulators should address the gaps in immersive technology governance through targeted guidance. These gaps include the absence of specific ICO guidance on immersive technologies as they relate to biometric data collection, and of Health and Safety Executive (HSE) guidance on the deployment of these technologies in workplace contexts.

However, horizontal regulation is often not specific enough to address the nuanced risks that arise from specialised use cases of immersive technologies. Sector-specific regulators should collaborate with horizontal regulators to consider how horizontal regulation could be applied to the use cases within their remit, identify gaps and develop corresponding sector-specific guidance. The first publication of Ada’s ‘Return to reality’ project aims to provide regulators and policymakers with a common understanding of immersive technologies to support such cross-regulatory collaboration.[5]

Recommendation 4: Account for use-case specificity through participatory engagement with impacted people and communities.

It is crucial for regulators to identify stakeholders impacted by specific use cases, including both workers and service users, and engage meaningfully with them. This will lead to a more granular understanding of the particular context of deployment of a given use case, of the task being augmented and of the circumstances of impacted people, enabling regulators to adequately address these factors and develop effective risk mitigation strategies.

Summary of findings

The tables below summarise the impacts of immersive technologies, including both new impacts and pre-existing benefits that are enhanced or risks that are exacerbated by immersive technologies, and their drivers. These benefits and risks are further discussed in the ‘Impacts of immersive technologies’ section of the report.

Benefits

Impact category: Market, economy and innovation

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Increased investment in technology leading to economic growth | Enhanced | Market dynamics |
| Improved service delivery in some sectors | Enhanced | Technical design / data collection and processing practices / incentives for business adoption |
| New options for digital-experience-based training | New | Technical design |
| Productivity benefits for businesses | Enhanced | Technical design |

Impact category: Privacy and data protection

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Anonymous access to experiences that resemble in-person social interactions | Enhanced | Technical design |
| Support for privacy-enhancing technologies | Enhanced | Market dynamics |

Impact category: Human autonomy

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Increased access to digital experiences | Enhanced | Technical design |
| Access to embodied digital experiences | New | Technical design |

Impact category: Interpersonal connection

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Facilitation of embodied digital interactions and collaboration | New | Technical design |
| Facilitation of embodied digital experiences aimed at interpersonal skills development | New | Technical design |
| Ability to communicate and collaborate with others remotely | Enhanced | Technical design |

Impact category: Freedom of thought, expression and assembly

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| New modes for self-expression and identity exploration | New | Technical design |
| Increased options for self-expression due to anonymity | Enhanced | Technical design |
| Greater accessibility of assemblies | Enhanced | Technical design |
| Enhanced tools for creativity and collaboration in business contexts | Enhanced | Technical design / incentives of businesses |

Impact category: Health and safety

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Access to experience-based digital services that promote physical health | Enhanced | Technical design |
| Reduced need for engagement in hazardous and safety-critical tasks due to options for digital experience-based training | Enhanced | Technical design / incentives for business adoption |
| Potential increase in worker satisfaction due to productivity gains, contingent on company policies on adoption and use | Enhanced | Technical design / incentives for business adoption |

Impact category: Equality

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Access to experience-based digital services and events | New | Technical design |
| New accessibility tools enabled by immersive technologies | Enhanced | Technical design |

Impact category: Environmental impacts

| Impact | New or enhanced | Driver(s) |
| --- | --- | --- |
| Increasing awareness of climate change through immersive technology applications | Enhanced | Technical design |
| Reduction of pollution-generating activities through access to embodied digital experiences | New | Technical design |
| Use of digital twins to optimise energy consumption in business contexts | New | Technical design / incentives of business adoption |

Risks

Impact category: Market, economy and innovation

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Market dominance by few big tech companies | Exacerbated | Market dynamics / data collection and processing practices |
| Increased surveillance of workers and management through algorithmic systems | Exacerbated | Market dynamics / incentives for business adoption / technical design / data collection and processing practices |
| Increased performance evaluation metrics due to greater worker surveillance | Exacerbated | Technical design / incentives for business adoption / data collection and processing practices |
| Untested business models for the business adoption of immersive technologies | New | Technical design / incentives for business adoption / market dynamics |
| Market dominated by few big tech companies with extensive supply chain control | Exacerbated | Market dynamics |
| Dominant companies control development trajectories and reduce competitive pressures | Exacerbated | Market dynamics |
| Market concentration enables greater biometric data collection by leading companies | New | Market dynamics / technical design |

Impact category: Privacy and data protection

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Exacerbation of invasive data collection practices and erosion of different forms of privacy | Exacerbated | Technical design / market dynamics / incentives for business adoption |
| Profiling of end users | Exacerbated | Technical design / data collection and processing practices / incentives for business adoption / market dynamics |
| Bystander data collection | Exacerbated | Technical design / data collection and processing practices |
| Mass concentration of biometric data | New | Market dynamics / technical design / data collection and processing practices |
| Public distrust due to opaque data processing practices | Exacerbated | Technical design / data collection and processing practices / market dynamics |
| Large-scale data collection by employers | Exacerbated | Technical design / data collection and processing practices / incentives for business adoption |
| The implementation of immersive technologies across business contexts also impacts the privacy of service users | Exacerbated | Technical design / data collection and processing practices / incentives for business adoption |

Impact category: Human autonomy

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Reduced user agency in purchasing decisions and controlling use of immersive products, driven by hyper-personalisation through biometric data collection | Exacerbated | Technical design / market dynamics |
| Creation of immersive deepfakes through biometric data collection | New | Technical design / data collection and processing practices |
| Enhanced granularity of 2D deepfakes through biometric data collection | Exacerbated | Technical design / data collection and processing practices |
| In business contexts, hyper-personalisation could exacerbate over-reliance on immersive technologies in decision-making | Exacerbated | Technical design / data collection and processing practices / market dynamics |
| Reduced worker agency through increased surveillance | Exacerbated | Technical design / incentives for business adoption / data collection and processing practices |

Impact category: Interpersonal connection

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| In-person social interactions and relationships eroded as a result of overindulgence in immersive environments | Exacerbated | Market dynamics / technical design / data collection and processing practices |
| Reduced interpersonal interaction between service providers and users due to augmentation | Exacerbated | Technical design / incentives for business adoption |
| Desensitisation to human harm in safety-critical sectors (e.g. soldiers teleoperating drones) | Exacerbated | Technical design / business incentives |
| Reduced social interaction between colleagues due to increased worker surveillance | Exacerbated | Incentives for business adoption / technical design / data collection and processing practices |

Impact category: Freedom of thought, expression and assembly

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Chilling effects on freedom of expression and assembly due to increased user surveillance | Exacerbated | Market dynamics / technical design / data collection and processing practices |
| Censorship and erosion of freedom of expression due to content moderation practices | Exacerbated | Technical design |
| Chilling effects on freedom of expression and assembly in business contexts due to increased worker surveillance | Exacerbated | Data collection and processing / technical design / incentives for business adoption |

Impact category: Health and safety

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| New forms of physical strain, health impacts and safety risks brought about by design of immersive hardware | New | Technical design |
| Mental health harms attributed to prolonged use of immersive virtual worlds | Exacerbated | Technical design / market dynamics / data collection and processing practices |
| Addiction to immersive technology products | Exacerbated | Technical design |
| Targeted harms inflicted by other users | Exacerbated | Technical design |
| Vulnerabilities to technical malfunction and physical strain in safety-critical contexts | Exacerbated | Data collection and processing / technical design / market dynamics / incentives for business adoption |
| Mental health impacts of increased worker surveillance | Exacerbated | Technical design / data collection and processing / market dynamics / incentives for business adoption |

Impact category: Harmful content

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Harmful portrayals of social groups in platform designs and through user-generated content | Exacerbated | Technical design |
| Targeted harassment and hate speech | Exacerbated | Technical design |
| New forms of targeted harm enabled by embodied interactions, including stalking and assaults with physical impacts | New | Technical design |
| Mental health impacts of online harms exacerbated by embodied experience | Exacerbated | Technical design |
| Harmful socialisation exacerbated by embodied experiences | Exacerbated | Technical design |
| Financial crime | Exacerbated | Technical design |
| Challenges in real-time immersive interaction and behaviour moderation | Exacerbated | Technical design |

Impact category: Equality

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| Inaccessible hardware and software design | Exacerbated | Technical design |
| Exacerbation of digital divide in consumer contexts and service provision | Exacerbated | Technical design / market dynamics |
| Financial inaccessibility | Exacerbated | Technical design / market dynamics |
| Disproportionate health and safety risks for disabled users | Exacerbated | Technical design |
| Biases in content moderation | Exacerbated | Technical design |
| Representative bias in synthetic immersive content | Exacerbated | Technical design |
| Exacerbation of inequalities in digital service provision due to financial and physical inaccessibility | Exacerbated | Technical design / market dynamics |
| Exacerbation of workplace inequalities due to hardware and software inaccessibility | Exacerbated | Technical design / incentives for business adoption |
| Exacerbation of disparate vulnerabilities between workers due to privacy and data protection risks | Exacerbated | Technical design / incentives for business adoption |

Impact category: Environmental impacts

| Impact | New or exacerbated | Driver(s) |
| --- | --- | --- |
| High energy consumption requirements of immersive virtual worlds | Exacerbated | Technical design |
| E-waste due to short immersive technology hardware lifespan | Exacerbated | Technical design |
| High data-storage energy requirements | Exacerbated | Technical design |

Introduction

From remote surgery to the use of information overlay for live translation,[6] immersive technologies have captured public attention, driven by the new ways they allow users to interact with the digital world. While immersive technologies have not yet been adopted broadly, they have been implemented to augment specific tasks across diverse sectors, ranging from employee training to public service provision.

This brings about distinct impacts driven by factors such as the dynamics of the product landscape, data collection and management practices, the technical design of these technologies, and the incentives for businesses to adopt them.

This is the final paper from our Return to reality project, which addresses the need for resources to support policymakers in governing immersive technologies so that they are safe for the people and communities who use them. It identifies benefits and risks that can arise from the deployment of immersive technologies and makes key recommendations for regulators and policymakers.

The report presents a picture of the range and nature of the impacts of immersive technologies at a granular level. We draw on an iterative multi-methods research design that included a literature review, expert interviews and case studies of the lived experiences of ‘impacted communities’. Through this approach we identified, refined and consolidated nine key impact categories, looking at both risks and benefits.

For our impacted community case studies, we developed workshops in close collaboration with the public participation charity Involve. In these workshops we used deliberative techniques informed by established impact assessment frameworks to engage communities impacted by two immersive technology use cases, alongside subject matter experts. The workshop participants explored the impacts of each use case based on their lived experience and knowledge of the use case and the context of deployment. Together, the case studies give insight into both business and consumer deployment contexts.

Case study one: Immersive social worlds

The first case study focused on immersive social worlds (ISWs) and engaged 12 active ISW users. ISWs are popular commercial use cases focused on facilitating social experiences through immersive communication and collaboration.

We selected this case study because it illustrates the impacts of an important consumer use case, given the relatively broad adoption of ISWs compared with other use cases. The popularity of ISWs among children also makes this use case instrumental in highlighting distinct impacts that immersive technologies may have for younger users.

In our workshop we brought together active users of platforms such as Second Life, Rec Room, VRChat and Minecraft. Participants had a diversity of experience in terms of how ISWs have affected them, along with in-depth knowledge of the environments and communities that make up ISWs. Through listening to them, we were able to build a more nuanced and comprehensive picture of impacts than has previously been rendered.

Through working in deliberative modes, exchanging their experiences and listening to others, participants identified commonalities in outlook and shared experiences as ‘impacted communities’. The most significant of these was how essential ISWs are to their social relationships and connection to community, their sense of inclusion and belonging, and their ability to express themselves. ISW users chose the six impacts most important for them:

  1. Impacts on freedom of expression.
  2. Making new relationships and maintaining old ones.
  3. Bullying, and young people being exposed to inappropriate content.
  4. Accessibility and the digital divide.
  5. User data being misused and harming vulnerable groups.
  6. Potential overindulgence.

Case study two: Augmented reality (AR) for warehouse operations

The second case study focused on the use of augmented-reality (AR) glasses to enhance warehouse operations. We brought together seven people with direct experience of working in warehouses, who are likely to be future users of this technology if it is more broadly adopted.

AR glasses are aimed at increasing productivity and efficiency by displaying instructions that guide employees through each step of their tasks. While this use case has not yet been broadly adopted, it is currently deployed in some areas of manufacturing and logistics.[7]

This case study provides insight into the impacts of immersive technologies in business contexts. Participants had different backgrounds and experiences in warehouse work, as well as different perspectives on their workplace.

The diverse insights and experiences of these workers generated a wide-ranging view of impacts, as well as a shared sense of themselves as ‘impacted communities’. Their conversations often revolved around the same themes: that warehouse environments are large, crowded, busy and noisy; that their work is detail-oriented, fast-paced, physically demanding and repetitive; and that targets create a high-pressure and precarious work environment that is strictly supervised by managers.

Warehouse workers chose the six impacts most important for them:

  1. Invasion of privacy.
  2. Enabling targeting and harassment.
  3. Increased risks to health and safety.
  4. Impact on communication.
  5. Impacts on targets.
  6. Reduced social interaction and connection.

While workshop participants’ insights were grounded in their own experiences, they also reflected on how different workers were impacted in different ways in this environment. This allowed us to incorporate a wider range of evidence, based on lived experience and observation of people who may have particular vulnerabilities in relation to this use case, such as people who have English as a second language, workers on zero-hours contracts and LGBTQIA+ workers.

In each impact category discussed in this paper, we provide insights from the workshops, to give real-world examples of how impacts may play out in practice.

We then present key recommendations for the governance and regulation of current and emerging immersive technologies. These recommendations are the culmination of our project ‘Return to reality: An exploration of immersive technologies’, drawing together its key concepts and definitions, key trends in the immersive technology landscape, key benefits and risks, and lessons learned from the two case studies.

Impacts of immersive technologies

In this section, we provide an analysis of the impacts of the use of immersive technologies, including both benefits and risks. We note which impacts are new or unique to immersive technologies, and which either enhance existing benefits or exacerbate existing risks.

While the case studies exemplify many of the same impact categories, there are significant differences in how they play out in context. For example, concerns with privacy and data protection are present in both case studies, but they are differently informed by their sector of deployment, the task being augmented and the circumstances of the impacted community.

Immersive social worlds (ISW) users’ concerns revolved around the use of data reflecting their social lives and close relationships to enable targeted advertisements. Privacy and data protection issues discussed in relation to AR glasses were very different, informed by warehouse workers’ insecure working conditions, the hierarchical managerial dynamics and the workplace culture. Concerns revolved around how information about workers might be disclosed to their managers or co-workers and potentially used to enable harassment and micromanagement.

As we discuss in the ‘Governance recommendations’ section of the paper, this demonstrates the need for sector-specific guidance that addresses particular use cases directly, with attention to the contextual factors that shape their distinct impacts.

The impacts discussed in this section are underpinned by common drivers such as dynamics in the immersive technology product market, the incentives for businesses to adopt immersive products, the data collection and management practices of these technologies, and their technical design.

Dynamics in the immersive technology market are discussed extensively in Reality check, the second report from this project ‘Return to reality’. Market dynamics shape the development of immersive products, informing how they are commercialised and the business models behind their design.

Immersive technologies have experienced cyclical waves of hype and decline, yet current market dynamics show sustained developer investment that may lead to benefits such as economic growth.

However, the market landscape is most significantly characterised by the dominance of a handful of key players operating ‘loss leader’ data business models. This presents a range of risks, including anti-competitive market practices and the mass concentration of biometric data.

Incentives for businesses to adopt immersive technologies include optimising productivity, improving service delivery and reducing costs. These factors drive many of the impacts on workers and service users discussed in this report, such as potential benefits for service delivery in some sectors and detriment to workers’ agency in conducting tasks.

The above two drivers contextualise the data collection and processing practices of deployers of immersive technologies. These practices increasingly incorporate advanced biometric data collection and processing, user profiling, and the integration of generative AI for user interaction management and content personalisation.

This combination presents significant risks to privacy and data protection, with resulting impacts on the autonomy and freedom of thought, expression and assembly of consumers, service users and workers, among other risks.

Lastly, the technical design of immersive technologies underpins an array of impacts. This includes factors related to the hardware design of immersive products, such as wearable sensors that facilitate the mass collection of biometric data, or limitations within hardware and software development which make products inaccessible to some users.

Specialised functions leveraged by immersive products, such as immersive simulation, communication and collaboration, which facilitate embodied digital experiences and multi-user behavioural interactions, also bring about new impacts. These include options for experience-based digital training and risks related to moderating multi-user behavioural interactions.

Below we discuss the impacts of immersive technologies, as well as the drivers behind them, by dividing them into nine distinct categories.

Market, economy and innovation

How could immersive technologies impact companies, service delivery and the economy?

Immersive technologies could result in large productivity gains in some sectors and potentially improve services, including education and healthcare, through new equipment, improved procedures and better training for workers.

However, anti-competitive practices may stifle innovation, raise consumer prices and limit consumer choice in the market. At the same time, service provision through immersive technologies may cause distributive justice issues, with those with greater digital literacy and financial resources enjoying greater access to improved services.

Potential benefits

Investments in immersive technologies may generally lead to market growth in the sector, particularly if the technology is adopted at a large scale by companies.

At the same time, immersive technologies could improve service delivery in some industries. For example, they show promise in education in disciplines such as science, given their ability to visually demonstrate the content being taught.[8] This was emphasised by participants in our case study workshops (‘participants’),[9] with one saying:

‘I think … [immersive technologies have] the capacity for making education much more exciting, much more accessible and in many cases better and more effective.’[10]

An example of this might be allowing students to ‘become’ human cells to observe different organelles in 3D. While the effectiveness of this technology in improving knowledge retention is contested, research seems to suggest that it may improve feelings of presence for students.[11]

Immersive technologies present new options for digital, experience-based training for workers and students. This benefit is unique to immersive technologies given their technical ability to simulate tasks with high levels of realism. Notable examples include practising delicate surgeries,[12] operating machinery in process industries[13] and excavation safety.[14]

Participants particularly emphasised how VR training could alleviate training costs, with one participant saying: ‘Well, very specific training that is very costly. That … requires extreme focus, and for that VR would be obviously the preferred technology.’[15]

Immersive technologies also show the promise of increasing productivity in certain industries. This is particularly the case in manual labour industries such as manufacturing, where immersive technologies such as VR interfaces have augmented productivity by streamlining complex tasks and reducing errors,[16] especially in the case of assembly or disassembly tasks.[17]

This was reiterated by participants, with one stating: ‘You don’t need to interact with a keyboard or a mouse or all that kind of hardware. You just kind of move your arms and it’s a more natural way of interacting … and that can have lots of benefits … can unleash productivity, creativity, et cetera.’[18] However, research on the consistency of this effect across sectors is in its early stages.

Potential risks

There are concerns about anti-competitive practices in the immersive technology sector, which is currently dominated by a small number of big technology companies. These key players have cross-supply-chain control, high levels of patent dominance and large amounts of funding.

In contrast, venture capital funding has been on a downward trend since 2022, with seed rounds and total investments decreasing year on year.[19] This was emphasised by participants,[20] who expressed concerns about monopolies, high entry barriers for smaller companies and the tendency for smaller companies with a competitive edge to be acquired by big companies.[21]

For example, Meta self-promotes through the Meta Store with which its immersive technology devices are exclusively compatible; this increases the entry barrier for third-party app developers.[22]

These market dynamics may also stifle innovation in the sector, as dominant companies can either out-price or purchase smaller competitors. Equally, they may increase consumer prices and narrow consumer product choices in the market.

Such dynamics may also result in the greater collection and concentration of biometric data by dominant technology companies. This is driven by the prevalence of ‘loss leader’ strategies,[23] where companies lose money on hardware and expect to recoup it through their apps[24] and consumer data, driving the anti-competitive dynamics.[25]

Biometric data collected through Meta’s immersive technology may become instrumental to Meta’s business strategy in the same way as data harvesting through platforms such as Facebook, WhatsApp and Messenger.[26]

As described by one participant: ‘It’s all about trying to prop up their own vision for how they can grow their first party apps to eventually turn on their … advertising business model because they’ve been subsidising all of this for years’.[27]

Business models where data collection is central to revenue generation present concerns around the financial incentives for companies to enact invasive data practices that deteriorate user privacy and autonomy.

Companies with broad presence across the supply chain, such as Meta and Microsoft, also benefit from greater collection and processing of user data which can be used to fuel and reinforce their dominance in the market.

This not only risks establishing monopolies in the sector but also exacerbates risks to privacy and autonomy due to the concentration of biometric data within a select few companies.

Dominant technology companies establish the development trajectory for the entire immersive technology market, reducing competition and increasing their influence over innovation pathways.

Meta’s emphasis on consumer products over business solutions shows how market concentration can constrain development in other areas, as its control over platforms and app store curation limits opportunities for diverse innovation.[28]

This lack of competition reduces accountability among dominant companies and narrows the range of products and use cases that developers can viably pursue. The positive reception of Apple’s entry into the immersive technology market with the Apple Vision Pro[29] demonstrates how increased competition can challenge existing players’ control, creating greater accountability[30] and encouraging broader innovation beyond any single company’s restrictive vision of the technology’s potential.[31]

The adoption of immersive technologies by businesses may increase surveillance of workers and management through algorithmic systems. In addition, warehouse workers from our case study workshops pointed out that performance metrics may become more demanding in certain industries, and that this could come alongside closer monitoring.[32]

Several participants also acknowledged the business risks for companies adopting immersive technologies, noting that business models for adopting immersive technologies are relatively untested and may not be scalable.[33]

As one participant stated:

‘We’re not building video games where … success means selling a lot of units. We’re building things that … deal with deep and sometimes very intimate … psychological issues.’[34]

Potential issues highlighted by participants included concerns about intellectual property rights in immersive virtual worlds (IVWs), particularly in retail use cases, which may make organisations hesitant to enter the ecosystem.[35]

How immersive technologies may impact workers’ performance

 

‘Targets will grow – there will be more to do and more monitoring.’ – Warehouse worker

 

Warehouse workers felt that improvements to productivity resulting from the use of AR glasses to augment their tasks could also result in their targets being increased and more closely monitored, making work more stressful and precarious, and leaving employees prone to health and safety incidents.

 

They suggested regulatory actions to reduce these concerns, such as ensuring that workers are appropriately trained in using the technologies and allocated to tasks that complement their skills; safeguarding their breaks; and protecting them from being made redundant or losing their job if they have challenges in using the technology. Older disabled workers were noted as particularly vulnerable to this risk.

Privacy and data protection

How could immersive technologies impact people’s right to private life? How safe is people’s private data?

Immersive virtual worlds enable user engagement and community interaction that resembles in-person interactions more closely than other technologies, while allowing anonymity through the use of avatars and pseudonyms. Additionally, the underlying infrastructure of some immersive technology products may promote the advancement and adoption of decentralised, privacy-enhancing technologies.

Through their hardware, including sensors and cameras, these technologies can collect a wide range of biometric data, from eye movements to head positions and heart rate, posing new risks to privacy and data protection.

This is exacerbated by the recent integration of generative AI by leaders in the immersive technology industry. Many participants discussed concerns about immersive technologies infringing on users’ privacy.[36] This could include risks that apply across both business and consumer use cases, as well as others which are specific to either domain.

Such risks have many knock-on effects across impact categories, including human autonomy; interpersonal connection; freedom of thought, expression and assembly; equality; and market, economy and innovation. Privacy and data protection is therefore a particularly significant factor to address in safeguarding immersive technologies.

Potential benefits

Immersive virtual worlds allow users anonymous access to embodied digital experiences that resemble in-person social interactions but that may not be possible in person, such as visiting international museums, exploring other countries or communicating and collaborating with people from all over the world.

Mass adoption of immersive technologies could also support privacy-enhancing technologies such as blockchain and cryptocurrencies. Many immersive virtual worlds such as Decentraland[37] rely on decentralised digital ownership such as NFTs (non-fungible tokens) and virtual real estate: these technologies support privacy and data protection because they allow people to control their personal information without relying on centralised companies or governments to store and protect it.[38]

Potential risks

In the context of immersive technologies, consumers may be increasingly vulnerable to invasive data collection practices, such as biometric data collection, and to the erosion of different forms of privacy, including local (i.e. privacy of one’s personal space), communicational (i.e. secure messaging and conversation privacy), physical and behavioural privacy.

Existing market dynamics fuel this mass concentration of biometric data. Dominant companies such as Meta or Microsoft will be able to carry out more detailed profiling of end users.

Types of data collected include physiological (e.g. head movement and heart rate),[39] environmental (e.g. visual data about the context of deployment)[40] and positional data (e.g. users’ position and movement).[41]

Data collection practices have intensified with the rise of large language models (LLMs).[42] For example, Meta’s recent revision of its privacy policy for its Ray-Ban smart glasses requires users to manually delete voice recordings if they do not want them to be used in AI training.[43]

Consumer-facing companies may also share data across their various platforms. Another incentive for consumer data collection is the ambition of many of these companies to advance their LLMs: companies such as Meta have begun scraping data from their entire ecosystem to train their products.

The increasing convergence of consumer products with LLMs may open up new forms of consumer user profiling. Examples include the Meta Ray-Ban smart glasses.[44] The natural language interface employed by LLMs prompts the user to disclose more sensitive information.

This is reinforced in the literature reviewed for this paper, which suggests that the conversational style and the affective dynamics that LLMs can foster lead to longer conversations in which users may disclose sensitive information they otherwise would not have.[45]

Such large-scale collection of data may also make consumers more vulnerable to and affected by data leaks. And it may allow dominant companies to better understand audiences, build their LLMs further by training them on user data and create more engaging immersive platforms, further entrenching their dominance in the LLM market.

Consumer use cases of immersive technologies may also pose unique risks of bystander data collection. Products designed to be worn in public spaces, such as smart glasses, may collect information about bystanders without their consent or awareness through technical features such as cameras and microphones.

This data can be used to create ‘shadow profiles’ – profiles of people who have not explicitly consented to their data being collected, who may not even have used the product.[46] Design measures already exist to promote bystander awareness of immersive technology use, such as the small blinking red light on the Meta Ray-Ban smart glasses, but they tend to be insufficient to reliably achieve this aim.[47]

Finally, as touched on by participants, many companies lack transparency when developing immersive technologies, particularly around data use. This can contribute to public distrust[48] and exacerbate data protection risks such as leaks, cyber-attacks and data theft.[49]

How data collection in immersive social worlds may erode user privacy

 

‘Users should own their data.’ – Immersive social worlds user

 

Immersive social worlds (ISW) users discussed feeling more ‘watched’ in virtual environments than in physical environments, and feeling like they have less privacy. They highlighted that the data collected from users in ISWs goes beyond what is necessary for them to function.

 

They expressed concern about data collection practices and misuse of data, including a lack of consent in data sharing and the use of biometric data to enable targeted advertisements. They also raised concerns about the long-term potential for sophisticated surveillance, given the detail of the data collected and the monopoly-like structure of the ISW market.

 

To resolve these issues, ISW users said that policymakers should consider how to address gaps in data protection legislation brought about by developments in data collection and processing, emphasising the need for meaningful consent, such as opt-in practices, and transparency in how data is used.

Business use cases of immersive technologies may result in large-scale data collection by employers. This could be used to increase surveillance in the workplace, encourage micromanagement[50] and harm worker privacy.

Some interview participants discussed privacy concerns specifically related to employment contexts.[51] One described how algorithmic management systems ‘don’t necessarily create a holistic picture of a worker in their performance’ and set unrealistic expectations of workers which can place them in dangerous situations or result in them not taking adequate breaks.[52]

These risks are pressing given the current rate of adoption of algorithmic management tools in the workplace, with 79 per cent of companies in European countries surveyed by the Organisation for Economic Co-operation and Development (OECD) adopting workplace monitoring.[53]

In contrast to other types of algorithmic management tools, immersive technologies can enable employers to collect biometric data, such as head movements, to be used in assessing employee performance. This may result in job security being dependent on the appropriateness and fairness of evaluation tools: evaluations of workers may reflect how tech-savvy or digitally literate they are rather than how good they are at their job.

Similar concerns were expressed by warehouse workers, who touched on how information overlay glasses would enable surveillance by managers and increase their vulnerability to harassment. This could potentially limit their ability to choose how to perform their tasks, communicate comfortably with their colleagues and assemble at work, or expose protected characteristics to managers.

Examples in the literature highlight that increased collection of biometric data via immersive overlay technologies may allow employers to detect the intellectual disabilities of people in the workforce which might not otherwise be apparent.[54] The fear of this happening could impact users across many of the impact categories by limiting freedom of expression and worsening interpersonal connection and social interaction between employees. This could in turn damage wellbeing and impact equality in the workplace due to disproportionate impacts on workers with intellectual disabilities and/or from marginalised groups.

The implementation of immersive technologies across business contexts also impacts the privacy of service users. These technologies could be used to profile service users in particularly sensitive contexts. For example, their implementation in the education sector could allow for the inference of certain cognitive conditions, such as attention deficit hyperactivity disorder (ADHD), among students.[55]

In a business context, there may be crucial differences in levels of privacy depending on whether access to user data is restricted to the company or open to a third-party technology provider. This may result in both employees and service users being unaware of the extent to which their data is collected, processed and shared across companies.

How immersive technology adoption may erode workers’ privacy

 

‘Workers should have a choice about what data is used.’ – Warehouse worker

 

Warehouse workers noted that information overlay adoption could exacerbate already significant monitoring practices in their workplace. They described the warehouse environment as one with strict supervision by managers in which they already experienced privacy violations and ‘feeling recorded’ when using audio overlay technologies in the workplace.

 

While they discussed some potential benefits to increased monitoring, such as helping workers to concentrate and be more careful with their work, they also expressed concerns about how the technology could increase micromanagement and reveal personal information to managers, which might increase harassment.

 

Increased monitoring could also limit their ability to choose how to perform their tasks, communicate comfortably with colleagues and assemble at work. They selected ‘invasion of privacy’ as one of their six prioritised impacts, and highlighted the importance of data being processed only when necessary, of workers understanding how their data is used, and of workplace hierarchies being considered in policy interventions to avoid situations where line managers have access to their direct reports’ data.

Human autonomy

How could immersive technologies impact people’s ability to make free, independent and well-informed decisions about their lives?

From a virtual rollercoaster experience[56] to a first-person tour of Mars,[57] immersive technologies offer new experiences, services and interactions that might otherwise have been inaccessible due to circumstances such as physical disabilities and financial constraints, thus enhancing users’ sense of autonomy. At the same time, profiling practices and personalised content generation, alongside monitoring and immersive evaluation, could actively undermine the autonomy of both consumers and workers.

Potential benefits

Immersive technologies could support human autonomy by providing users with increased access to experiences and environments they can inhabit. While incumbent technologies which are not immersive, such as online video platforms and video games, may similarly allow users to experience new things from the comfort of their own home, the embodied digital experiences enabled by immersive technologies more closely resemble in-person interactions.

Relevant use cases can vary from immersive simulations of experiences that might otherwise be inaccessible, to immersive virtual worlds where users can interact with others regardless of their geographical location.[58]

Interview participants[59] touched on how immersive simulation and communication functions can offer experiences of different time periods and geographical locations,[60] as well as fictional environments that do not exist in physical form, such as other universes.[61] As illustrated by one participant: ‘You go in places where you wouldn’t be able to be, but also to experience something that would not be accessible in your real existence.’[62]

Potential risks

Consumers may be exposed to user interaction management techniques such as profiling, nudging and targeted content that pose risks to their ability to make autonomous decisions. This risk is not unique to immersive technologies, but it is exacerbated by the biometric collection capabilities of wearable immersive technology hardware, and by profiling algorithms that collect data on users’ biomarkers and make inferences about their psychographic (for example, values, interests) and demographic (for example, gender, age) characteristics.[63] As described by one interview participant:

‘Biometric data could be used to kind of nudge you into certain decisions.’[64]

Profiling algorithms pose risks to users’ ability to make autonomous purchasing choices. Users may be exposed to personalised and targeted advertisements and other messages based on their inferred characteristics to steer them into buying certain products.[65]

Interview participants discussed how the graphic quality of immersive virtual worlds and their capacity to provide embodied experiences can create very emotionally evocative experiences.[66] This can be beneficial, such as in the use of these technologies for psychological treatment,[67] but targeted content may also evoke emotional responses from users in ways that challenge their decision-making.

As one participant put it, these advertisements may be ‘much more effective on your decision-making and much more influential compared with the current cases of behavioural advertising and all that we have in online platforms’.[68]

Targeted content, however, is not confined to virtual worlds: it can be delivered across various platforms accessed through both immersive and non-immersive technologies.

For example, data gathered from Meta’s Quest headsets is used to create a unique advertising ID for each user, which can then be used to personalise advert selection on the Meta Horizon platform.[69] This may undermine user decision-making and autonomy. As one participant stated:

‘How ethical is it to change an ad based on my heart rate?’[70]

While live ad adaptation is not currently used by companies, consumer neuroscience research suggests that it may be feasible in the future.[71]

User interaction management techniques may challenge users’ ability to limit their use of immersive technologies. ISW users described how immersive virtual worlds are designed to foster addictive behaviours among users that can lead to overdependence.

These design choices include user avatars which promote psychological ownership through customisation capabilities and investment in digital identities such as virtual apparel,[72] virtual assets such as real estate with cross-platform asset ownership,[73] and virtual alternatives to physical interaction.[74]

The adoption of these features is driven by financial incentives for developers[75] and is supported by the integration of LLMs into consumer products, such as Meta’s AI assistant integration in some of its Meta Quest headsets. This integration of LLMs allows for more granular personalisation which continually adapts to the users’ state and environment.

Through multi-turn interactions between the LLM and the user, as well as other information that is being fed into the LLM, such as video, transcripts, voice recording and images, models can produce hyper-personalised content and responses to user requests. This creates a feedback loop: greater personalisation drives greater engagement, which in turn drives greater data collection.[76]

While much LLM integration in consumer use cases has centred around AI assistants, such as Meta AI, several other uses have also appeared, including LLM-powered non-playable characters in immersive virtual worlds, and AI companions or characters such as EMooly.[77]

Finally, the granularity and quality of the biometric data collected through wearables may support the creation of highly detailed ‘immersive deepfakes’ of consumers.[78] This could undermine user autonomy by stripping users of control over their identity and image in digital spaces, in turn raising risks of reputational harm and misinformation.

Interview participants discussed this as an area of concern.[79] One participant noted:

‘I have an embodied presence in a game and I have an avatar that is me, and I have my voice and the way I move. Theoretically you could capture all of that data … moving through the space: me talking, my avatar, and you could feed that into a model that could then recreate virtual [name] and … go into virtual worlds and meet people.’[80]

Another participant also emphasised the distinct and high-impact consequences of impersonation if a person’s avatar is ‘stolen and people pretend to be that person’.[81]

There are few protections in UK law that specifically address impersonation or ‘rights to identity’ for anyone other than celebrities, although some European jurisdictions are considering them.[82]

How immersive social worlds may reduce user autonomy through addictive design

 

‘They are like cigarette companies.’ – ISW user

 

ISW users selected ‘potential overindulgence’ as one of their six prioritised impacts, referring to the possibility of the technology fostering addiction. While this overlaps with impacts related to health and safety due to its potential harm to users’ mental health, it also poses a risk to human autonomy: users reported feeling that they had limited control over how much they engaged with ISW platforms due to the psychological dependence that features within these platforms can trigger.

 

They discussed how ISW platforms are financially incentivised and designed to keep users on the platform, which may result in technology addiction, overindulgence and dependence.

 

Crucially, they suggested that users who do not have a strong social circle, are neurodivergent and/or are from low-income households are particularly vulnerable to this risk.

In business contexts, hyper-personalisation could exacerbate over-reliance on immersive technologies in decision-making contexts, as the intimate and persuasive nature of personalised interactions might lead professionals to place excessive trust in AI recommendations without adequate scrutiny or verification, eroding workers’ agency in decision-making.

For example, an industrial study which implemented AR-based digital instructions in a factory found that participants reported a loss of agency when using this technology.[83] This perceived loss of autonomy could be exacerbated by workplace monitoring practices using immersive technologies.

Business uses where employees adopt immersive technologies to perform their tasks often entail the collection of biometric data, which could reduce employee agency through increased surveillance.

Employers could use such data to monitor employees and conduct evaluations to infer attributes such as productivity. Increases in worker monitoring may limit employee agency over how to conduct their tasks, particularly in sectors where strict supervision is already present.

How immersive technology may erode workers’ autonomy

 

‘It makes it easier for managers to target employees.’ – Warehouse worker

 

Warehouse workers expressed concerns around the impacts that immersive technology adoption could have on ways of working through increased monitoring. They noted that policymakers should prevent ‘the little freedom workers have’ from being rolled back as a result of the adoption of these technologies.

 

They discussed how, in an environment of strict supervision, greater visibility of workers’ pace of work to management could exacerbate micromanagement and increase pressure to work faster.

 

They also discussed how increased monitoring would reduce social interactions and collaboration with colleagues while performing tasks, both of which were identified as key ways of alleviating the challenges of their work, increasing their productivity and varying their tasks.

 

These impacts illustrate how the adoption of this technology may erode employees’ already limited agency. In addressing these concerns, warehouse workers highlighted the importance of safeguarding job security and ensuring that workers have enough freedom to interact socially while performing their tasks.

 

They commented that policymakers should encourage changes within existing workflows, such as a shift towards group work and targets rather than individual workflows.

Interpersonal connection

How could immersive technologies affect connection, dialogue and social interactions between people?

Through specialised functions, including immersive communication and collaboration, immersive technologies – and in particular the immersive virtual worlds that they can be used to create – present opportunities to augment how people communicate and interact. But their implementation may also hinder in-person social skills, and they could replace or limit workplace communication when adopted in business contexts.

Potential benefits

Immersive virtual worlds (IVWs) may have a unique benefit in facilitating and encouraging new, embodied digital interactions and collaboration between users. Indeed, benefits in terms of interpersonal connection were mentioned by many participants.[84]

One participant described how ISWs come ‘closest so far to really feeling you’re immersed and socialising with friends when you’re apart’,[85] while another commented that they can provide people with ‘enormous amounts of friendship, companionship [and] sharing community’.[86]

By lowering barriers to socialisation, ISWs may be particularly helpful for people who struggle to connect with others in person.[87] This was supported by ISW users in the first case study workshop for this report, who commented that the optional anonymity of platforms allowed users to distance themselves from anxieties relating to physical appearance or social abilities.

Participants discussed how ISWs can mitigate loneliness, providing experiences that help users in coping with their physical contexts.[88] One participant illustrated this through an example of people who may not have opportunities to express themselves romantically or sexually through in-person relationships but can feel ‘a bit more human’ when doing so through ISWs.[89]

Similarly, users who might otherwise have difficulty engaging in collaborative work in an employment context might communicate more effectively and productively through the medium of ISWs, making work environments more inclusive.

For example, research suggests that people with autism spectrum disorder may particularly benefit from technologies such as immersive virtual meetings, as these may be less overstimulating than in-person meetings and allow attendees to customise the experience according to their preferences, for instance by adjusting lighting and volume or muting other attendees.[90]

ISWs also offer opportunities for individuals to meet and collaborate in an embodied way beyond geographical limitations, allowing them to engage with people and communities that may otherwise be inaccessible due to limited user mobility, geographical location or social factors. For example, Thrive Pavilion is a Horizon Worlds community aimed at older users who might otherwise struggle to meet others due to mobility constraints.[91]

ISWs stand out from other forms of digital interaction given the level of immersion they offer, which more closely resembles in-person interaction. This was supported by participants, who described traditional forms of social media as ‘dehumanised’ – ‘words on a page’[92] – providing passive forms of interaction, with users navigating platforms by scrolling.

They compared this with more participatory and realistic ISWs where users are ‘actively participating and immersed’.[93] As illustrated by one participant, ‘communications feels more like face to face and more interactive … we get more of [an] experience of talking to a person.’[94]

Other specialised functions of immersive technologies, such as realistic immersive simulations of environments, have also been used to support people in developing interpersonal skills for in-person interactions. For example, virtual reality exposure therapy seeks to treat social anxiety disorders.[95]

More broadly, VR has also been used to increase empathy in various contexts,[96] such as to promote humanitarian aid[97] and increase awareness of suffering caused by neurodegenerative diseases.[98] This was reflected by participants, with one calling VR an ‘empathy machine’.[99]

However, research on this aspect of immersive technologies is in its infancy, and several ethical problems surround the gap between promoting empathy and prompting changes in behaviour.[100]

Ethical concerns were also raised by participants, with one stating: ‘A lot of the dialogue around it is a little bit problematic … that by putting on a VR headset you can understand what it’s like to be a refugee.’[101]

How ISWs could support users in establishing and maintaining relationships and community

 

‘These worlds and communities are really important to users.’ – ISW user

 

ISW users emphasised the benefit of ISWs in enabling interpersonal connections. They shared the ways in which ISWs had supported them to establish and maintain relationships with friends, partners and groups of people with shared interests and how ISWs had enabled them to feel included and accepted by others.

 

They noted that ISWs are often the main form of communicating and socialising for user communities, many of which are vulnerable and rely on ISWs to feel safe. Consequently, ISW users selected ‘making new relationships and maintaining new ones’ as one of six prioritised impacts.

 

They highlighted the importance of ensuring that policies regulating ISWs minimise the risk of platforms being closed, as this would mean that users lose these vital social connections. They also mentioned that use of ISWs could impact in-person interpersonal relationships, as users may have less incentive to interact with people, experiences and responsibilities in person.

 

ISW users noted that this could be particularly concerning in relation to vulnerable users with ‘poor social skills’, neurodivergent users and low-income users who might not be able to afford activities in the physical world.

Potential risks

Overindulgence in IVWs may erode users’ ability to interact with others in person. While some studies show that immersive technologies can help users to exercise and improve their social skills, the use of IVWs can also have the opposite effect, with overindulgence in virtual worlds eroding users’ ability to interact in person and potentially leading to social isolation.[102]

At the same time, the effectiveness of immersive technologies in reducing social anxiety in therapeutic use cases, such as virtual reality exposure therapy, remains heavily contested.[103]

Immersive technologies may also reduce human interaction between service providers and users. This could be particularly important in safety-critical services. For example, the use of immersive teleoperation in healthcare, such as in remote surgery, or virtual therapy could reduce or even eliminate physical interaction between practitioners and patients.

This could be particularly impactful in contexts where immersive technologies are instrumental to successful surgeries but practitioner and patient interaction is also crucial, such as specialised surgical procedures where the patient is conscious.[104]

The reduction of human interaction could have effects on users in other contexts too. For example, the use of immersive teleoperation for military warfare, such as VR headsets for operating drones, may desensitise military personnel to human harm.[105]

Use cases that leverage immersive technology functions to augment workers’ tasks are likely to present challenges to social relations between workers by increasing worker monitoring. Warehouse workers discussed how they would not feel comfortable engaging in casual conversations with their colleagues while using visual overlay glasses, due to concerns that their managers could listen to them.

Business adoption of immersive technologies could also present particular communication challenges for workers who have difficulties using these technologies. This may particularly impact disabled users and users whose native language is not supported by devices.

How immersive technologies may reduce workers’ social interactions

 

‘I like chatting with other colleagues, but it’s not allowed.’ – Warehouse worker

 

Warehouse workers selected ‘reduced social interaction and connection’ as one of their six prioritised impacts. They discussed how conversations with colleagues were crucial to alleviating the challenges of their often detail-oriented, fast-paced, physically demanding and repetitive day-to-day work.

 

They also described the benefits of collaborating with colleagues in helping them develop skills, be productive and vary their tasks. They noted that adopting AR glasses to augment their tasks would limit their ability to socialise, as they would not feel comfortable talking to others while wearing such devices, and that this could leave them feeling more overloaded with work.

 

They also discussed how the use of these technologies would result in one-sided interactions, where managers are able to speak to workers through the devices but not the other way around.

Freedom of thought, expression and assembly

How could immersive technologies impact people’s right to think freely, to hold beliefs, to express themselves, and to assemble?

Immersive technologies offer new modes of engagement in both consumer and business use cases, enabling creative self-expression, collaboration and gatherings.

As a result, they have potential to create new opportunities for individuals to exercise their rights to freedom of thought, expression and assembly. However, these technologies also introduce concerns around surveillance and content moderation practices that may significantly suppress these freedoms.

Potential benefits

Immersive social worlds open new modes of self-expression and identity exploration that are not possible in physical environments. This is particularly noticeable in consumer use cases, where ISW users can have customisable avatars, equip these avatars with virtual fashion items, design buildings, produce digital art and create immersive experiences for other users.[106]

These tools can empower users to construct environments, personas and communities that represent their beliefs, identities and values. This can be particularly beneficial for members of marginalised communities, such as LGBTQIA+ communities, who may have more difficulty expressing themselves in their physical environments.

This was reflected in the first case study workshop for this research, with one participant stating:

‘People have the opportunity to portray themselves in kind of whatever way they want to and have [an] avatar that might … represent physical identity that they might want to have at some point but haven’t yet’.[107]

The anonymity and ability to use aliases in virtual environments can further encourage freer expression, especially for individuals from marginalised groups who may feel unsafe disclosing aspects of their identity offline.

Immersive spaces may foster cultural exchange and collective action, enabling gatherings and assemblies that might otherwise be inaccessible due to geography, safety concerns or political restrictions.[108] This may also increase accessibility to protests or cultural events for users with limited mobility or other constraints on attending such activities in person.[109]

Business uses of immersive technologies may uniquely increase employee creativity and collaboration. In particular, the use of immersive visualisation, communication and collaboration may enhance employees’ ability to express themselves creatively in design-led industries, as well as facilitating collaboration within companies and across industries. For example, immersive simulation is increasingly being used in architectural contexts, given its proven ability to enhance various stages of the design process.[110]

Potential risks

The highly sensitive data collection capabilities of immersive technologies could increase user surveillance, deteriorating freedom of expression, thought and assembly. This impact could vary significantly between consumer and business use cases.

ISW users noted that in consumer use cases, pervasive data collection could lead users to hesitate to express themselves or to participate fully, due to fears and uncertainty about how their data might be used outside of the platform in ways that could harm them.[111]

This included concerns that data could be used by law enforcement agencies for immigration control[112] or to infer sensitive personal attributes for informing targeted advertisements. For example, inferences about sexual orientation have been used to target advertisements in ways that have outed individuals.[113]

Content moderation in ISWs presents another major challenge, with the potential to result in censorship and erode freedom of speech. There is often a tension between ensuring safe virtual environments and enabling free speech.

While moderation is essential to curb harmful content and behaviour, biased moderation can result in the censorship of political beliefs and the silencing of marginalised voices. This was discussed by ISW users in the first case study workshop, and research has shown that moderation practices may disproportionately censor marginalised communities, limiting their ability to participate in immersive social environments and curbing the potential benefits to freedom of thought, expression and assembly.

For example, transgender people and Black people have been found to have their accounts and content removed from social media platforms more frequently than those from other groups.[114]

The censorship of political views could also arise in ISWs due to confusion about how country-specific definitions of what constitutes illegal content can be applied in a universal international space.[115]

Content moderation is particularly difficult given new forms of content in ISWs that are not easily detected by moderation algorithms, as well as the difficulty of moderating live conduct. As touched on by one participant: ‘You’re also dealing with non-verbal speech … with the creation of virtual objects and virtual environments. Is that a form of speech or not? Should it be considered speech, and if so, how?’[116]

Uncertainty about how to moderate such forms of expression has led to platforms relying on less effective approaches, such as individual reporting and volunteer moderators.[117]

In the workplace, immersive technologies might track employee behaviours in ways that also limit free expression. The use of immersive technologies to improve employee productivity, such as information overlay for factory workers or immersive collaboration for product design, may allow employers to track employee data, enabling increased surveillance and micromanagement.

This would erode self-expression, communication and the sharing of beliefs in workplaces, and could in turn prevent forms of assembly, such as unionising. This effect cuts across the impact categories discussed.

How content moderation in ISWs may reduce users’ freedom of expression

 

‘Moderators are the only people who have actual power.’ – ISW user

 

ISW users selected ‘impacts on freedom of expression’ as one of their six prioritised impacts. In the ‘principle for change’ drafted for this impact, they stated that issues with content moderation pose risks to the freedom of thought, expression and assembly of ISW users.

 

They highlighted a belief that content moderators have disproportionate power, such as the ability to remove other users. Moderators’ actions may particularly affect individuals and communities who already feel that they have limited freedom of speech in online spaces, and may remove users’ access to valuable online communities and relationships.

 

In turn, this may limit the benefits of immersive technologies for such communities. These benefits include increased agency and self-expression arising from the ability to choose how characteristics such as gender and ethnicity are presented, thereby mitigating experiences of discrimination and allowing users to form social connections and relationships with more ease than in physical environments.

 

ISW users noted that moderators may have biases that limit their ability to maintain online safety without excluding certain voices or disallowing a diversity of opinions. ISW users believed that instead, freedom of expression of all kinds, excluding harassment, should be allowed in immersive social worlds.

How increased monitoring enabled by immersive technologies may exacerbate workplace harassment and limit freedom of expression

 

‘There is harassment about football, religion, sexuality, the way people talk, anything.’ – Warehouse worker

 

Warehouse workers highlighted how adoption of immersive technologies could increase surveillance and harassment by managers, eroding workers’ freedom of thought, expression and assembly. As a result, ‘enabling targeting and harassment’ was selected as one of the six key prioritised impacts in this workshop.

 

Participants discussed strict supervision, micromanagement and the prevalence of harassment as key factors shaping their experience of work. In the ‘principle for change’ warehouse workers drafted for this impact, they highlighted how the increased monitoring that would accompany the adoption of vision picking (AR technology which delivers visual instructions to guide warehouse workers during picking operations) would likely increase harassment by managers.

 

Discussions in the workshop included concerns about how the technology could reveal workers’ characteristics, such as accents, religions, sexuality and leisure interests, to managers, contributing to their ability to target workers. This would be likely to limit workers’ freedom of expression and assembly in the workplace, due to fear of harassment. To prevent this risk, warehouse workers noted, there need to be limitations on the data that managers can access through immersive technology devices integrated in the warehouse.

Health and safety

How could immersive technologies impact people’s physical and mental health?

Immersive technologies deliver new use cases that may promote greater health and safety both for businesses that adopt them, by providing alternatives to otherwise dangerous tasks conducted by workers, and for consumers, through products aimed at promoting healthy behaviours and use cases aimed at treating mental health conditions.

However, the potential benefits could be undermined by risks arising from implementation without appropriate safeguards, the physical and cognitive strain which often accompanies use, and new vulnerabilities which arise for users from the data collection and privacy implications of the technology.

Potential benefits

Emerging products aimed at consumer users of immersive technologies such as VR promote greater physical health. Examples include Supernatural, an app provided on the Meta Quest which promotes exercise through gamified fitness experiences, meditation guidance and guided stretches.[118]

Moreover, immersive technology consumer use cases such as gaming tend to involve more physical activity than non-immersive counterparts, given the use of motion sensors and body tracking, which allow a user’s physical movements to be translated into in-game movements. For example, the VR game Beat Saber promotes exercise through movement-based mechanics, where users swing virtual sabers to the rhythm of music.[119]

In addition, the interpersonal connections facilitated by ISWs may support the mental health of users who may struggle to establish social connections in person.[120] Immersive technologies could also improve the accessibility of mental health treatment. For example, delivering cognitive behavioural therapy through VR teletherapy allows for therapeutic counselling which more closely mimics in-person interaction than other digital alternatives, such as video calling.[121] This was noted as a particular benefit by participants.[122]

At the same time, several critiques of this use case have noted potential psychological risks such as increased patient anxiety,[123] the erosion of therapist–client relationships, and technical constraints that intervene in patients’ experience, including connectivity issues.[124]

Business adoption of immersive technologies may bring about physical and mental benefits for workers. The use of immersive technologies in training may reduce the need for workers to engage in otherwise hazardous and safety-critical tasks. This has been found to be effective in safety-critical industries including construction, mining, firefighting and the chemical industry.[125] As one workshop participant stated: ‘There are certain situations where it’s way better to do it in VR because of those certain, you know, contexts and situations, like dealing with nuclear training, different stuff like that.’[126]

The implementation of immersive technology functions such as immersive teleoperation may also decrease the need for workers to put themselves in safety-critical situations to conduct certain tasks. For example, drones operated through immersive teleoperation remove the need for pilots during military warfare.[127]

Research also suggests that the implementation of immersive technologies in certain occupations may increase worker satisfaction and productivity. For example, creative industries such as those in the design space have reported time and cost savings for employees as a result of the productivity benefits which immersive technologies can offer beyond a traditional flat screen.[128]

However, evidence to support these claims remains limited, and positive impacts may vary between occupations and may be contingent on training support and other contextual factors related to company policies on adoption and use.

This was reflected in the case study workshop, where warehouse workers were generally positive about the implementation of information overlay in a warehouse environment but expressed concerns such as increased risk of accidents while operating machinery such as LLOPs (low-level order pickers).

Potential risks

The literature suggests that consumers may experience new forms of physical strain due to immersive technology hardware design, and this was supported by participants.[129] This includes discomfort, neck strain and cyber-sickness, whereby users experience symptoms similar to motion sickness.

Consumer use of immersive technologies may also come with other physical risks to the user, with limited spatial awareness leading to risks such as bumping into physical objects.

Immersive technology use may also affect consumer mental health as a result of the evidenced psychological impacts of prolonged use. More specifically, the literature suggests that users could experience dissociation and depersonalisation due to prolonged engagement in ISWs.[130]

As reflected both in the literature and by workshop participants,[131] ISWs could encourage addiction in users because of the heightened immersion compared with other digital mediums, with research suggesting that the level of immersiveness or feelings of embodiment in immersive technologies may be a predictor of addictive tendencies among users.[132] This could be a result of data-driven personalisation, which big technology companies can implement to a greater extent given their existing access to user data.

These risks could be exacerbated in situations where the user is dependent on the technology, such as people who depend on it as their primary means of social connection and community participation.

Users may also be targeted for intentional harm by other users in ISWs, which may impact their physical and mental health. An example is the startling or ‘strobing’ of epileptic or otherwise vulnerable users.[133] Potential mental health impacts include the psychological effects of experiencing cyberstalking and the potential for immersive virtual worlds to exacerbate psychological disorders such as eating disorders and self-harming.[134]

At the same time, as covered below in this section, exposure to harmful content, including homophobic, sexual or stereotypical forms of harassment, may harm users’ mental health.[135]

How ISWs may harm users’ mental health

 

‘This is a legitimate addiction.’ – ISW user

 

ISW users discussed the potential for dependency on and addiction to ISWs, with one user going as far as to compare developers of the technology with cigarette companies. This addiction could have other mental health impacts, such as reducing interaction with people, experiences and responsibilities in the physical world.

 

Aligned with this, ISW users selected ‘potential overindulgence’ as one of their six prioritised impacts. In drafting their ‘principle for change’ for this impact, they stressed that policymakers should understand that platform incentives are not designed with users’ health in mind, and that platforms should not be allowed to ‘profit from users’ suffering’.

Similarly, businesses’ adoption of immersive technologies may create new forms of physical strain such as general discomfort, neck strain and cyber-sickness.[136] This may be aggravated where immersive technologies become essential to conducting job tasks, requiring extensive use throughout the day.[137]

Businesses’ use of immersive technologies in safety-critical contexts may make workers vulnerable to accidents caused by technological malfunctions.[138] This may also bring about new risks for companies, given their responsibility for workplace injuries under statutory duties of care, potentially exposing them to significant compensation claims and regulatory penalties if they fail to adequately assess and mitigate the unique health and safety risks posed by immersive technologies.

As noted by warehouse workers, the increased surveillance and monitoring by employers through adoption of immersive technologies may also negatively impact workers’ mental health by reducing interpersonal connection and feelings of agency and freedom in the workplace, and by inhibiting self-expression and communication due to fears about how personal data might be used outside of the use case context. This could also affect company performance, as lower levels of employee wellbeing result in lower productivity.[139]

How immersive technologies may increase health and safety hazards in workplaces

 

‘Accidents happen mostly because of a lack of training and no map or directions.’ – Warehouse worker

 

Warehouse workers expressed concerns around risks to health and safety that could result from augmenting their tasks with information overlay. They noted that workers may be distracted by visual displays, that the physical strain of using devices may result in mistakes and that challenges in learning how to use the technology may result in safety hazards.

 

These concerns were contextualised by factors within their work environment, described as large, crowded, busy and noisy, with large numbers of workers and moving vehicles on site, and by the fast-paced and high-pressure nature of their work.

Harmful content

How could immersive technologies expose users to harmful digital content or behaviour (such as misinformation, harassment, fraud, illegal content or behaviour)?

Consumers may be exposed to harmful content and behaviour in immersive virtual worlds, which could impact their mental wellbeing. This could include exposure to harmful platform designs, harmful behaviours enacted by other users, bullying and harassment, and financial crime.

Many of these harms are exacerbated by the forms of interaction facilitated by ISWs, which more closely resemble in-person interactions than traditional forms of social media. With unrestricted access to ISWs and little or no age verification, these risks may be especially prominent for children.[140] Lastly, immersive virtual worlds pose particular challenges to mitigating harmful content and behaviour.[141]

Some interview participants discussed IVWs and immersive games that employ stereotypical and sometimes sexually degrading portrayals of people, including women, children, people from minoritised ethnic groups and people from religious groups.[142]

For example, one participant discussed a past trend in games where ‘Arabs were the bad guys’.[143] This is reflected in the literature: gender,[144] racial[145] and religious[146] stereotypes have been shown to be consistently prominent in video games and in the video game community.

Workshop participants also discussed how, even on platforms that are not designed in harmful or discriminatory ways, users may still be exposed to harmful content through involvement with communities that enact harmful behaviours.

Within virtual environments, users may directly experience harassment, hate speech, sexual abuse and cyberbullying.[147] Both participants and ISW users from the first case study workshop discussed how these forms of abuse were disproportionally targeted at marginalised and/or vulnerable groups. As illustrated by one participant: ‘If you don’t have strong protections against harassment or bullying on the platform, then that can disproportionately affect certain […] groups.’[148]

These forms of harmful content and behaviour can also be found in traditional forms of social media. However, within immersive environments, interactions more closely resemble in-person interactions, and mental health harms may be felt more vividly by users.

Workshop participants described how immersive social worlds may facilitate new kinds of harassment, bullying and defamation that are more physical and interactive than those facilitated through textual means, and that can have a greater impact as a result.[149]

For example, users in the metaverse can be stalked, blocked by other avatars, or virtually approached and harassed by other avatars.[150] Future developments such as haptic technology, which allows users to feel vibrations through controllers or wearables in immersive virtual spaces, may heighten the potential for harassment in ISWs by adding physical sensation.[151]

The extent to which immersive experiences may elicit embodied somatic responses was well illustrated by one participant who described an immersive experience where ‘you go like hundreds of flights in a lift and you step out and there’s a plank into the abyss. And there’s this sense fight or flight can kick in, and I’ve seen people, you know, experience like terror with it.’[152]

Participants also discussed how some advanced forms of wearable technologies, such as data suits, may enable situations where instances of assault in ISWs may result in users feeling physical distress and pain, further narrowing the gap between physical and immersive experiences of physical and sexual assault.[153]

The embodied nature of immersive experiences may also exacerbate the risk of harmful socialisation in online environments, which children are particularly vulnerable to. Participants described how the design of immersive environments may encourage children into extremist behaviour[154] and affect their development towards discriminatory attitudes.[155]

The distinct risks of harmful socialisation posed by ISWs were in part attributed to their playful nature, facilitated by embodied interactions. As illustrated by one participant:

‘It can deepen your biases against certain groups subconsciously. I think play is particularly very powerful because it takes you off guard because it appears spontaneous and it’s allowed to be experimental. That’s the whole idea … so if you’re immersed in it, you’re supposed to give a longer rope, which can go both ways.’[156]

Similarly, embodied interactions facilitate forms of relational intimacy that may not be possible through traditional social media. One participant, for example, reported that their child had described their first kiss as being a kiss that they had with another avatar in an immersive world.[157]

ISWs may make users, especially children, particularly vulnerable to having experiences and relationships with other users that encourage them to adopt harmful beliefs and behaviours:

‘Kids, without leaving their house, can kind of be involved in all kinds of crimes or kind of [learn] what it would be like to commit a crime in the physical world. And so forth or be exposed to all kinds [of] environments that [they] would have restricted access [to] in the physical world.’[158]

Emerging consumer use cases in immersive digital marketplaces (such as Decentraland[159]), where users can trade and buy assets including virtual real estate, goods and NFTs, may open new avenues for financial crime, including cyber-attacks and intellectual property violations, alongside unclear regulatory compliance for digital assets.

These scams may involve digital assets, such as avatar skins, in-game items or virtual real estate, which are often purchased with real money. As a result, these digital assets and marketplaces are vulnerable to scams such as code exploits (techniques that abuse software vulnerabilities to compromise system security), wash trading (artificially inflating trading volume by buying and selling the same asset to oneself), money laundering, ‘middleman scams’ and more.[160]

As touched on by one participant,

‘[ISWs] come with their own sets of risks and potential negative implications for in the physical world, including buying virtual property and losing all its value.’[161]

The harms arising in immersive technologies pose several regulatory challenges. ISWs suffer from targeted harassment and biased moderation, with harm falling disproportionately on marginalised people.

This was reflected in interviews for this report:

‘Harms are worse for marginalised people: the harms … [are] targeted primarily at … women, people of colour, LGBTQIA+ people.’[162]

In addition, age verification remains challenging, with difficulties both in verifying users’ age and in establishing liability among app providers, platform creators and product designers for false age reporting. As one participant stated:

‘How do you know if somebody is the age they claim to be? … Who has responsibility over that? Is it the operating system level? Is it at the individual app level?’[163]

One challenge is that enforcing strict verification practices, such as requiring IDs, may simultaneously hinder users’ freedom of expression – one of the qualities which is most celebrated in immersive technologies such as ISWs.[164]

Companies’ adoption of immersive technologies may expose their employees to new forms of harmful content and harassment by colleagues in the workplace, through information overlay, voice messages or text-based notifications.

Finally, real-time moderation of user interactions and exchanges is another prevalent challenge in the context of immersive technologies. As stated by one participant:

‘Everything’s real time and there’s not … as much of an opportunity for content moderation.’[165]

This challenge is echoed in the literature, given the difficulties in moderating live, avatar-to-avatar interactions in ISWs and in navigating content moderation practices in cross-border virtual environments.[166]

How ISWs may expose users to harassment and inappropriate behaviour


‘The responsibility should be on the government, not just parents.’ – ISW user


ISW users discussed harmful content within ISWs as a significant concern. They highlighted how the anonymity enabled through ISWs, while it supports freedom, self-expression and interpersonal connections, also enables harmful behaviours.


Harassment and discrimination, the promotion of harmful behaviour, exposure to other users’ traumatic experiences, and children being exposed to inappropriate content were discussed as being common in these environments.


Accordingly, ISW users selected ‘bullying, and young people being exposed to inappropriate content’ as one of their six impact priorities. They highlighted that young people, women, minoritised ethnic groups, religious groups, LGBTQIA+ people, neurodivergent people and anyone with protected characteristics under the UK’s Equality Act 2010 may be particularly at risk of bullying in ISWs.


To mitigate this, they noted targeted approaches that governments could take, such as preventing underage users from accessing 18+ communities and providing greater education around bullying and VR in schools.

Equality

How could immersive technologies impact people’s fair and equal treatment?

Immersive technologies could promote greater accessibility and new forums for marginalised groups to express themselves, by providing experiences and services which may otherwise be inaccessible, and supporting community building and assembly.

However, accessibility issues and the potential of algorithmic profiling, in terms both of existing practices and of its increasing convergence with generative AI, could result in unequal harms and even discriminatory treatment.

Potential benefits

Immersive technologies have the potential to increase access to experience-based services and events which could otherwise be out of reach for some people, such as those with limited mobility. These can range from museums,[167] to immersive therapy,[168] to protests or cultural events.[169]

As long as the required technology is financially accessible, innovations such as these could allow more equitable access to services. This benefit was expressed by several participants, with one stating:

‘In terms of accessibility … having the opportunity of being given a lecture by … [a] world-class professor in Singapore without me having to go there, without having to … pay the tuition for that school, without having to travel there, without having to actually … settle and live there, all of that sharing of information is a huge benefit.’[170]

AR and VR technologies can also provide new use cases for increasing accessibility, including alternative ways to access and digest information and interact with digital environments, potentially allowing disabled users to act more independently.[171] Notable examples include AR-enabled real-time sign language interpretation.[172]

Potential risks

Product design may embed certain biases, with users from some backgrounds able to use products with greater ease as a result of their greater representation throughout the development process.[173] For example, many VR headsets are not designed with users with involuntary body or eye movements in mind,[174] a concern reflected in interviews, with one participant stating: ‘Some people just aren’t built for using VR.’[175]

Aside from physical accessibility issues, immersive technologies’ financial cost, long set-up process and tendency to tire the user are also accessibility bottlenecks which are yet to be addressed, and which were raised by ISW users in our workshop.[176]

Immersive technology hardware and software can also disproportionately cause unintended consequences for some disabled people, such as sensory overload, cyber-sickness and fatigue. Blind and low-vision users, alongside deaf users, also face limited accessibility when using many devices because of a lack of accessibility functions such as subtitles.

Biases may also occur at an algorithmic level if AI algorithms used in immersive technologies inherit biases present in their dataset.[177] This risk was also emphasised by workshop participants.[178] While these biases can affect a wide range of immersive technology functionalities, they may pose a particular risk to content moderation.

For example, existing biases in social media moderation, such as a greater tendency to flag users from marginalised communities, run the risk of being passed on to immersive spaces.[179] While this is a concern in any technology which uses recommendation algorithms – because of the vulnerability of algorithms to inheriting biases in their training data – it is particularly prominent in immersive technologies due to their increasing convergence with generative AI technology, which has also been shown to perpetuate and reinforce stereotypes and biases.[180]

This could be particularly concerning in the context of ISWs, given the use of generative AI models for synthetic environment, avatar and content development. For example, recent investigations found that OpenAI’s video generator ‘Sora’ exhibits significant stereotypes in relation to professional roles, racial and ableist biases in character representation, and systematic exclusion of diverse body types, all of which could be amplified when transferred to the generation of immersive content such as avatars, environments and immersive experiences.[181]

Broadly speaking, the adoption of immersive technologies may exacerbate the digital divide in certain services and consumer contexts. More specifically, access to immersive technology-powered services may be limited for those with lower digital literacy, or without access to these technologies at home due to their high cost.

This may also be a result of exclusive design practices, whereby immersive technology hardware is not suitable for users who are already at a disadvantage in the workforce, such as those with physical disabilities. As a result, users who are already in advantageous positions, such as those with high digital literacy, greater disposable income and more technical resources, are more likely to experience the benefits of immersive technologies.

How ISWs may increase the digital divide


‘Those with less money can’t afford a more modern set-up, which creates a digital divide.’ – ISW user


ISW users discussed impacts related to equality which revolved around the lack of accessibility of immersive social worlds. They touched on the type and nature of the disparate vulnerabilities that marginalised and/or vulnerable groups face across identified key impact categories.


They discussed the fact that ISWs are financially inaccessible for many people and also noted issues with inaccessible hardware design. In alignment with these concerns, they selected ‘accessibility and the digital divide’ as one of their six key impacts.


Within the ‘principle for change’ drafted for this impact, ISW users noted how low-income households, individuals living in areas without broadband infrastructure and those in low-income countries may face the most barriers to access, resulting in the digital divide growing wider.


ISW users noted that accessibility issues could be addressed by investing in broadband infrastructure, ensuring the right to repair devices, placing limits on costs, and initiatives aimed at making devices accessible to groups that would most benefit from them.


They also highlighted that policies should be developed through public participation to identify affected groups and uncover barriers. Concerns around inequality and discrimination were also discussed under other identified key impacts, with marginalised groups highlighted as disproportionately vulnerable to harms related to ISWs and subject to discrimination within these environments.

Business adoption of immersive technologies could exacerbate existing inequalities in service provision, as certain services could become less accessible for certain groups. Currently, immersive technology products are expensive for consumers and often require high levels of digital literacy and the physical capacity to use these technologies.

The provision of services through immersive platforms, such as South Korea’s recent release of a virtual municipality for government services,[182] may exacerbate existing inequalities in access to health, education or financial services, by excluding those who may not be able to afford the devices required, who do not know how to use them or who do not have the physical capacity to use them.

Risks for consumers related to accessibility of hardware design, alongside unintended consequences of the technology for those with certain disabilities, could also exacerbate workplace inequalities.

Technologies may affect the productivity and performance of workers unequally, with some benefiting more due to factors such as greater digital literacy, better suitability to product ergonomics and lower vulnerability to the cognitive impacts of prolonged immersive technology use.

This may exacerbate existing inequalities in the workforce, by disproportionately negatively impacting workers with certain physical or mental health conditions or lower digital literacy.

This may be compounded by the integration of immersive evaluation: increased performance monitoring and evaluation may result in differences in performance being increasingly noticed and flagged to employers, thus adding to job insecurity for those who already experience accessibility issues.

Similarly, invasive data collection practices may disproportionately harm workers who are already in vulnerable positions, such as those who experience social exclusion or who are more susceptible to being harassed, by disclosing information which may propagate this harassment.

How immersive technologies may exacerbate workplace inequalities and jeopardise marginalised workers’ job security


‘There isn’t an option for different languages. If there was, it could be helpful as [my colleague] is still struggling with English.’ – Warehouse worker


Warehouse workers touched on how inaccessible design and implementation of immersive technology use cases in a warehouse context may jeopardise marginalised workers’ job security.


They discussed concerns about workers who may not be able to use visual overlay devices: those at risk of epileptic seizures or with greater photosensitivity, workers with eyesight problems who would require custom prescriptions for AR glasses, workers whose hairstyles or turbans the technology does not accommodate, workers whose language is not represented in the technology, and older workers who might have less experience of using technology.


These concerns were reflected in two key impact categories voted on by warehouse workers: ‘impacts on communication at work’, where they emphasised that a lack of multilingual options would present challenges for workers who do not speak English; and ‘impacts on targets’, where older workers, workers with different learning skills and disabled workers were highlighted as particularly vulnerable to challenges in using the technology and inadequate support to achieve productivity targets, which could ultimately result in job losses.


Warehouse workers noted that accessibility and user-friendliness should be priorities in the design of these devices, and that the technology should be fair to all workers. Furthermore, they stressed the need for practical forms of support in adopting the technology, such as inclusive training.


Concerns about inequality and discrimination were also discussed under other key impacts: marginalised and/or vulnerable groups were highlighted as disproportionately vulnerable to harms (for example, as a result of not having appropriate health and safety training) and subject to discrimination and harassment.

Environmental impacts

How could immersive technologies impact the environment?

Immersive technologies are increasingly being used to generate awareness of environmental impacts and promote pro-environmental behaviour. The implementation of immersive technologies at scale could also reduce environmentally costly behaviours, such as commuting to work. However, the technology’s high energy consumption and the short lifespan of hardware products pose significant environmental risks.

Potential benefits

The literature suggests that immersive technologies could be effective at increasing awareness of climate change. For example, a VR experience of a glacier melting in Switzerland over the course of 220 years was found to increase environmental awareness.[183]

A VR experience where users navigate an inundated urban environment and interact with objects was found to improve urban planners’ awareness of the effects of climate change.[184] However, the evidence is sparse as to whether this increased awareness translates to changed behaviours.[185]

Through providing services online, using immersive technologies such as IVWs, certain use cases could help to cut down pollution-generating activities. For example, the implementation of IVWs for team collaboration and communication in recruitment company RippleMatch allowed the company to remove in-person requirements for work, reducing the emissions which would have been caused by employees commuting.[186]

This benefit was also mentioned by participants, one of whom stated:

‘I think [the] majority of people have never even been on an airplane, right? … And so, if you can give someone that experience of being in another place and connecting with other people … that would be good for the environment too.’[187]

Mass implementation of immersive technologies by businesses may also help companies to cut down on energy consumption through the use of digital twins to optimise product manufacturing processes.

For example, Renault expects to reduce the carbon footprint of its vehicle production by 50 per cent through the integration of digital twins in IVWs,[188] allowing experimentation with vehicle design virtually without having to manufacture pieces. Similarly, Pepsi hopes that it will be able to improve the efficiency and environmental sustainability of its supply chain by experimenting with digital twins of its factories.[189]

Potential risks

Large-scale IVWs, a particularly popular use case for consumers, are very energy intensive. IVW energy consumption is many times higher than that of a standard web server and has a significantly higher carbon footprint.[190]

The short lifespan of immersive technology hardware may contribute to greater e-waste. Commercial headsets such as the Meta Quest 3 have a short lifespan due to their tendency to quickly become obsolete as a result of rapid innovation in the market.

For example, while Sony will be phasing out support for the PS4 after 12 years,[191] Meta began cutting support for the original Meta Quest headset after just five years.[192] This leads to more production (requiring significant amounts of energy consumption and rare minerals) and more e-waste. These environmental concerns were also shared by workshop participants.[193]

Mass adoption of immersive technologies by businesses could result in greater data storage requirements, and investment in energy-intensive infrastructure to support both the running of these devices and the storage and management of the data they collect.[194]

How ISWs may harm the environment


‘Older headsets are more obsolete – not good for the environment. They are repairable but companies don’t want you to repair them.’ – ISW user


ISW users pointed out the tendency for immersive technology hardware to become obsolete relatively quickly, and the consequent electronic waste produced. They commented that this is exacerbated by the difficulty in repairing hardware. They also raised concerns about the significant energy consumption of virtual social worlds.

Governance recommendations

In this section, we present key recommendations for the governance of immersive technologies. We emphasise the need for UK government funding to be directed towards regulators that are best positioned to address common drivers of the impacts of immersive technologies, and regulators that are responsible for high-impact sectors with use cases that either augment safety-critical tasks or directly impact vulnerable groups.

To develop pre-emptive governance approaches, regulators should continue to monitor developments in immersive technologies and evaluate the feasibility of immersive technologies reaching wider adoption levels. This can be achieved through monitoring the bottlenecks to broad-scale adoption that were identified in Reality check, the preceding report in this project’s series.[195]

At the same time, a variety of immersive technology use cases are currently being deployed in high-impact industries to augment safety-critical tasks, many of which impact vulnerable users. The high potential for risk of these use cases warrants regulatory action in the present.

For this reason, we urge regulators to transition from solely monitoring developments to actively addressing current use cases through targeted guidance. This requires revisions to existing horizontal regulation to address common drivers of impacts, as well as sector-specific guidance for particular use cases, prioritising those in high-impact sectors, which augment safety-critical tasks and which have direct impacts on vulnerable groups.

A sector-specific approach requires collaboration between horizontal and sector-specific regulators, alongside meaningful engagement with communities directly impacted by particular use cases, to understand the range of nuanced, contextual risks across different deployment scenarios.

The recommendations below are the culmination of this research project, which has explored key concepts and definitions of immersive technologies, key trends in the product and regulatory landscape, and the benefits and risks arising from these technologies.

They derive from our iterative, multi-methods approach which blends expertise in the field and current literature on immersive technologies with case studies drawing on people’s lived experiences of the technologies. Significantly, they consider the experiences of warehouse workers and ISW users, who are ‘experts by experience’ with in-depth knowledge of the contexts of deployment and how impacts may play out in practice.

Recommendation 1: Increase UK government funding for the regulation of emerging technologies

As discussed in the second report of this research project, immersive technologies have significant convergences with AI systems, as AI – and generative AI specifically – is increasingly integrated into immersive use cases. This integration exacerbates the existing risks of each of these technologies and also creates new risks, such as fuelling user addiction through personalised generative AI content.

Furthermore, as with immersive technology use cases more broadly, many use cases that bring together these technologies are found in high-impact industries, augment safety-critical tasks and affect vulnerable groups such as children and patients – factors that create significant potential for risk. These factors spur the need for relevant regulators to revisit or provide new guidance to mitigate risks, and the requirement for additional funding.

The government’s latest spending review allocated £2 billion to implementing the AI Opportunities Action Plan. This plan includes a commitment to supporting the UK’s global leadership on AI safety and governance via the AI Security Institute and a proportionate, flexible regulatory approach.

The plan emphasises enabling safe and trusted AI development and adoption through regulation, safety and assurance as a key approach, helping ‘to lay the foundations to enable AI’, and the government has made clear the important role of regulators in supporting innovation and the duty to protect UK citizens from risks presented by AI.[196]

It is currently unclear how much of the budget supporting this plan will be directed towards regulators. We recommend that a meaningful portion be allocated to empowering regulators to fulfil their duties in regulating AI, strategically prioritising the regulators best positioned to address immersive technologies, as outlined below:

  • Regulators positioned to address common drivers of impacts of immersive technologies. We have highlighted common drivers that shape the impacts of immersive technologies, including the technical design of immersive technologies, data collection and processing practices, dynamics in the immersive technology product market and the incentives for businesses to adopt these technologies. These drivers have cascading effects across impact categories and addressing them in regulation and guidance would therefore mitigate risks across impact categories. The following horizontal regulators are best positioned to address these drivers:
    • The Competition and Markets Authority (CMA) is best positioned among regulatory authorities to confront the market dynamics of immersive technology products using the Digital Markets, Competition and Consumers Act, particularly with respect to competition concerns and big technology companies holding dominant positions.
    • The Information Commissioner’s Office (ICO) is the most appropriate authority to oversee developer practices in data collection and management (particularly the collection and processing of biometric data), given its role in enforcing the UK GDPR (General Data Protection Regulation).
    • The Office for Product Safety and Standards (OPSS) is noted in the UK government’s ‘Response to Product Safety Review’[197] as relevant in addressing novel harms of immersive technologies, including safety and inaccessibility risks stemming from hardware limitations. The government also suggested that the OPSS work with industry to ensure product safety. This is also relevant to sector-specific regulators, which may need to address use-case-specific risks emerging from the specialised functions of immersive technologies, such as the challenges to content and behavioural regulation that arise from multi-user embodied digital experiences and require attention from Ofcom, for example. This is discussed further below.
    • The Health and Safety Executive (HSE), as the authority responsible for enforcing the Health and Safety at Work Act, is best positioned to address how business incentives to adopt immersive products may underpin risks for workers.
  • Sector-specific regulators responsible for high-impact sectors with use cases that either augment safety-critical tasks or directly impact vulnerable groups. Our research demonstrates that horizontal regulation does not provide sufficient specificity to address the nuanced risks that arise from specialised use cases of immersive technologies. Guidance should address individual use cases directly. As illustrated by our case studies, the impacts of immersive technologies are nuanced and particular to individual use cases. Impacts are shaped by contextual factors such as sector of deployment, the task being augmented and the circumstances of impacted people. Addressing use cases with attention to and knowledge of distinct contextual factors requires guidance from not only horizontal regulators with a clear digital remit, but also regulators responsible for sectors where specific use cases are being deployed. Sector-specific regulators may provide sufficiently specific guidance to address the nuanced risks of individual use cases precisely. The following regulators are responsible for sectors where current use cases have a high potential for risk and therefore should be prioritised:
    • Education regulators and policymakers: The formal education sector has adopted immersive technologies to augment student learning and evaluation. These use cases have significant impacts on children and young people that can be addressed by the Office for Students, the Office for Standards in Education, Children’s Services and Skills in England (Ofsted), and the Department for Education.
    • Healthcare regulators: The healthcare sector has adopted immersive technologies to augment practitioner decision-making, therapeutic treatment, surgical procedures and patient diagnosis. These use cases augment safety-critical tasks and may be addressed by the Medicines and Healthcare products Regulatory Agency and the General Medical Council.
    • Military/security/policing oversight or standards bodies: Immersive technologies have been adopted to augment safety decision-making and military operations, use cases that may be addressed by the Ministry of Defence, the College of Policing and the National Police Chiefs’ Council.
    • Transport regulators: Immersive technologies have been adopted to augment the safety-critical tasks of driver decision-making and decision-making in car design and manufacturing – use cases that may be addressed by the Vehicle Certification Agency.
    • Gaming/entertainment/social media regulators (Ofcom): Immersive technologies have been adopted to augment social interactions and gaming. Given the multi-user embodied digital interactions facilitated by immersive communication and collaboration, there are significant risks related to online harms and multi-user behavioural moderation. These use cases have significant overlap with children and may be addressed by Ofcom.

Recommendation 2: Regulators and policymakers should continue actively monitoring developments in immersive technologies, with a focus on bottlenecks to adoption

Developers of immersive technologies have put forward a vision of general-purpose technologies with multiple functions which have the potential to have a long-standing impact on society through multi-sector integration. While this vision has not been achieved, and adoption has instead taken place for specialised use cases, regulators and policymakers may benefit from monitoring developments relating to bottlenecks to broad-scale adoption, to continually evaluate the feasibility of immersive technologies reaching wider adoption levels, and develop pre-emptive governance approaches.

These bottlenecks are extensively discussed in Reality check, the second paper in this research project.[198] They include technical limitations, lack of public trust in the technology, public preference for incumbent technologies and a decline in investment.

While no body is currently responsible for monitoring the development of immersive technologies specifically, various governmental and regulatory teams should pay increased attention to these technologies and their bottlenecks, to establish pre-emptive regulatory strategies. These include the Department for Science, Innovation and Technology, working alongside the Government Office for Science and its Future, Foresight and Emerging Technologies team,[199] as well as sector-specific regulators’ foresight and monitoring teams, such as the Financial Conduct Authority’s emerging technology research hub.[200] Such work is already underway, with notable examples including the policy and horizon-scanning teams in the Digital Regulation Cooperation Forum (DRCF) and its immersive technology foresight paper.[201]

Recommendation 3: Horizontal regulators should provide targeted guidance with specialised consideration of immersive technologies and collaborate with sector-specific peers

While monitoring developments in the adoption of immersive technologies may inform a preventative approach, specialised use cases that are currently deployed warrant regulatory action in the present. We recommend that regulators urgently address existing uses of immersive technologies through guidance that provides specialised consideration, including revisions to horizontal guidance to address common drivers of impacts. These drivers have cascading effects and addressing them would therefore mitigate risks across these categories:

  • Dynamics in the immersive technology market shape the development of immersive products, informing how these technologies are commercialised and the business models behind their design. Some of the most significant risks arising from this relate to the ability of dominant big technology companies to consolidate products, power and influence. These dynamics raise concerns about anti-competitive practices. Combined with the data-driven business models of key players, which incentivise invasive data collection practices, they also raise concerns about the mass concentration of biometric data among large players in the space. These issues are of direct relevance to the CMA as they fall under the Digital Markets, Competition and Consumers Act, which aims to create a pro-competition market and gives the CMA the ability to respond quickly and flexibly to developments in these markets and to set conduct requirements. To address this driver, the CMA should pay particular attention to anti-competitive practices, which fall under its remit of ensuring fair competition under the Act.
  • Drivers such as improved business productivity and greater profit margins lead to numerous impacts on workers. These include increased worker data collection, with cascading harms such as reduced worker autonomy, chilling effects on behaviour and eroded interpersonal interactions between colleagues, and vulnerability to product malfunctions, which is particularly risky in safety-critical work contexts. These risks fall under the remit of the Health and Safety at Work Act, enforced by the Health and Safety Executive (HSE).[202] However, the Act does not currently address immersive technologies directly. The HSE may need to provide specialised guidance on how the Act applies to the workplace implementation of immersive technologies, and on how these technologies affect mental and physical wellbeing. For example, to mitigate the risk of new physical and cognitive strains caused by the technology, such as cyber-sickness, the HSE could draft guidance detailing how processes such as workstation assessments and the Display Screen Equipment Regulations apply to immersive technologies.[203] The risks posed by these technologies, and risks related to data collection and processing in workplace contexts, may also be addressed through procurement standards for immersive products in business contexts. Guidance could also specify how employee training applies to immersive technology use in the workplace[204] and address the potential for adoption to result in dismissals and redundancies.[205]
  • Data collection and processing practices for immersive products involve significantly more data than is required to enable technical functionality. This can lead to increased vulnerability to invasive data collection practices, mass concentration of biometric data among dominant tech companies, bystander data collection, chilling effects on freedom of expression and assembly in the workplace, and the erosion of in-person relationships.[206] To address this, privacy and data protection regulators such as the ICO should develop a legislatively backed biometrics guidance framework which would directly apply to the biometric uses and capabilities of immersive technologies. Among other things, this framework should update the definition of biometrics, establish a new regulatory body to oversee and enforce it, and be sector-agnostic, to account for the various implications and use cases of biometric technologies across sectors.
  • The technical design of immersive technologies drives a range of significant impacts, from challenges to adoption for disabled people due to inaccessible design, to safety concerns associated with prolonged use. To address this driver and subsequent impacts, the Office for Product Safety and Standards should continue its work to identify the physiological impacts of current VR products and expand the scope of this work to include other forms of immersive technologies and address accessibility issues.[207] Findings from this work should seek to not only inform users of potential adverse impacts but also establish design standards for immersive products across contexts of deployment. Finally, sector-specific regulators should address impacts that the specialised functions offered by immersive technologies pose in particular use cases, such as risks related to multi-user behavioural moderation that arise from embodied digital experiences, which fall under the remit of Ofcom.

Engagement and guidance are required from sector-specific regulators to target particular use cases directly, specifically and in context. Alongside horizontal regulators, sector-specific regulators can contribute to delivering comprehensive intersecting guidance. This sector-specific approach is consistent with the UK government’s Pro-Innovation Approach to AI Regulation white paper.[208]

As in our paper Reality check, the regulatory landscape of key sectors where immersive technologies are currently being deployed lacks specialised consideration of these technologies.[209] Current regulatory gaps include the potential exclusion of immersive technology products capable of emotional inference from the medical device regulation enforced by the MHRA, a lack of publicly available and overarching regulation on the use of AI- and ML-based immersive technologies within law enforcement in the UK, and an absence of regulatory guidance on the use of information overlays in vehicles to mitigate potential impacts on driver fatigue and strain.

Sector-specific regulators should address specialised use cases within their remit through targeted guidance, prioritising high-impact use cases that augment safety-critical tasks or which directly impact vulnerable groups such as children or patients. In our first recommendation above we highlight regulators responsible for high-impact sectors within which these types of use cases are currently being deployed. Guidance should both consider how horizontal regulation applies to particular use cases and establish sector-specific standards.

Through a combination of horizontal guidance that targets immersive technologies and sector-specific guidance on specialised use cases, regulators may address current gaps. This may be aided by cross-regulatory collaboration. For example, sector-specific bodies responsible for establishing standards for training in their sector, such as the General Medical Council, could collaborate with the HSE to develop guidance on immersive technology training for employees that accounts for factors relevant to their particular sector. A shared understanding of what immersive technologies are would facilitate this process and allow for better-targeted, more effective and well-informed guidance. The first publication in this research project provides regulators and policymakers with a common understanding of immersive technologies to support this cross-regulatory collaboration.[210]

Recommendation 4: Account for use-case specificity through participatory engagement with impacted people and communities

It is crucial for regulators to engage with the people and communities impacted by immersive technologies in order to understand how specific use cases impact people’s lives and to develop a more granular understanding of contextual factors that inform risks.

Participatory engagement with ISW users and warehouse workers, and the insights they shared, were essential to the findings of this study. For example, warehouse workers brought to light how the adoption of immersive technologies in the workplace could lead to harassment through the potential disclosure of information to managers or other co-workers, as well as by exacerbating employers’ ability to micromanage workers. This risk was not found in the literature review or shared in the expert interviews.

This underscores how crucial it is for regulators to engage meaningfully with impacted communities, which will vary depending on the context of deployment and the task being augmented. The understanding developed in this way will enable regulators to adequately address these factors and develop effective risk mitigation strategies.

Conclusion

The potential impacts of immersive technologies are numerous: from increased accessibility and improved and safer occupational training, to escalating e-waste and fuelling consumer addiction. We have identified a series of impact categories generally applicable to immersive technologies, alongside benefits and risks which arise from their deployment in different use cases and sectors.

These findings were identified through a multi-methods approach, blending a literature review, expert interviews and engagement with impacted communities. The impacts have been shown to be unique depending on the sector in which the technology is being deployed (for example, healthcare, transport or military) alongside the specialised function (such as information overlay or immersive evaluation).

Crucially, our approach to recognising lived experience has highlighted the important role that impacted communities can play in building understanding of the potential benefits and risks of immersive technologies, and therefore in their governance.

While evaluating impacts, participants in our workshops raised concerns about the benefits and risks of each use case. Some of these were distinct from those found through our literature review and expert interviews, and all of them provided a level of granularity that grounded our analysis of the impacts in real-world contexts.

Our research also uncovered differences in who benefits from and who is disadvantaged by immersive technology applications, depending on the context in which the technology is adopted. This emphasises the need for nuanced and sector-specific approaches in order to successfully mitigate the risks posed by the technology and maximise its potential benefits.

To tackle the various impacts that immersive technologies could have within and across sectors, horizontal and vertical regulators must revise regulatory approaches and work together to mitigate the harms.

Horizontal regulators will have to consider current regulatory gaps and produce guidance to tackle the distinct harms which immersive technologies present. Vertical regulators need to both investigate how existing horizontal regulation can be applied within their sector, and explore how sector-specific gaps can be addressed.

Recent technological advancements have brought the transformative potential of immersive technologies closer to reality than ever before. Yet with this promise comes both unprecedented and exacerbated risks, from market concentration and environmental degradation to threats to human autonomy.

The path forward requires more than technical innovation: it demands new approaches to governing these technologies. Only through meaningful cross-sectoral collaboration and genuine public engagement can we navigate these challenges successfully and unlock the full potential of immersive technologies: from new forms of creativity and interpersonal connection to enhanced accessibility tools, safer training environments, and use cases that could fundamentally transform how we work, learn and relate to one another.

Methodology

This paper expands on current understandings of the potential societal impacts of immersive technologies based on an iterative multi-methods research design grounded in a literature review, expert interviews and case studies with impacted communities.

In this research, we consider ‘impacted communities’ as groups of individuals who currently, or who are likely to, experience impacts from immersive technologies, either by choice or through social placement. By engaging with these communities, we were able to develop a better, more granular understanding of the range and nature of the impacts of immersive technology use cases.

Our approach differentiates itself from those of other studies by the expansive ways in which it incorporates lived experience. It is relatively common for researchers to investigate lived experiences of particular use cases when considering impacts of immersive technologies in particular social domains.

But a range of immersive technology use cases and the experiences of impacted people remained underexplored. The case study approach used for this research, where we test the veracity of broader understandings of impacts within specific use cases, sought to address this gap.

Case studies can surface the granularity of evidence necessary to understand how different immersive technology use cases play out; they can help us grasp a wider range of contexts of deployment, augmented tasks, perspectives and impacts that immersive technologies present to different people.

Below we describe the research methods in more detail.

Literature review

Our literature review was conducted between November 2023 and May 2024. It elicited evidence from annotated articles on immersive technologies and their social impacts. Through this review, we identified priority areas and evidence gaps in relation to how impacts are located and understood. We continued to refer to relevant literature throughout the research process.

Expert interviews

The project team conducted semi-structured interviews with 26 experts working with immersive technologies, including developers, investors, academics and practitioners, who provided diverse perspectives. Interview questions covered the following categories relevant to our project research questions: the timeline of immersive technologies, the product landscape, their technical components, and their impacts, including risks and benefits.

Impacted community case studies

The impacted community case studies used deliberative techniques to engage communities impacted by two immersive technology use cases alongside subject matter experts in two workshops informed by established impact assessment frameworks. We developed these case studies in partnership with Involve, a public participation charity.

In each case study workshop, participants surfaced, explored and evaluated the impacts of an immersive technology use case based on their lived experience and knowledge of the use cases and contexts of deployment.

The two case studies were:

  • Immersive social worlds (ISWs – immersive virtual worlds focused on facilitating social experiences): We engaged with current ISW users to think through risks and opportunities in virtual social worlds for different users, and how these may be heightened by emerging technologies.
  • Augmented reality (AR) for warehouse operations: This focused on engaging current warehouse workers to think through the use of information overlay AR glasses to augment operations in manufacturing and logistics sectors.

These use cases are currently being deployed, meaning that evaluating their impacts is likely to provide evidence of existing benefits and harms. Together, they give insight into business and consumer deployment contexts. Even where participants had not directly experienced these technologies, their lived experience and knowledge of the context of deployment provided a basis on which to anticipate potential impacts.

How we engaged case study participants to think about impacts

The design of both case study workshops was informed by the categories of impact identified through our other research methods at the time of designing the workshops (see activities 2 and 3 in the Appendix). Early in the workshop, impact categories were shared with attendees for consideration and reference. Throughout the workshop, subject matter experts shared presentations exploring each of these categories. ISW users and warehouse workers considered these and reflected on their own experiences to identify benefits and risks that each use case might bring about for them, other individuals and society more broadly. More information on how categories were established can be found below in the section ‘How the categories of impact were identified’.

In addition, processes within established impact assessment frameworks were adapted to support the workshop design. Impact assessment frameworks are procedures for conducting and documenting the collaborative evaluation and reflective anticipation of possible benefits and harms of technologies.[211] Components of two frameworks, the UK government’s Official Public Sector Guidance on AI Ethics and Safety and the Council of Europe’s HUDERIA tool,[212] were adapted to support workshop activities.

For example, ISW users and warehouse workers were asked to consider how each use case might impact individuals differently by considering impacts on multiple pen portraits of hypothetical personas. The varying characteristics selected for the pen portraits were informed by the stakeholder analysis process in the UK government’s Official Public Sector Guidance on AI Ethics and Safety, which highlights demographic and circumstantial characteristics as factors that inform individuals’ relative vulnerability to the impacts of technology.

Once participants had undergone multiple iterations of identifying impacts, they were asked to prioritise them. To support this, they were presented with adapted factors used in the HUDERIA framework to determine the degree of severity of potential risks of AI systems.[213] In the workshop, these factors were simplified and tailored to address both potential risks and potential benefits: evaluating the severity of potential risks based on how serious, widespread and reversible a potential harm could be, and (in the case of benefits) how widespread a benefit could be and how much it could advance the wellbeing of those impacted.

How the categories of impact were identified

Each of the methods illustrated above was used to define and refine the impact categories discussed in this paper. Initial impact themes were derived from our literature review and expert interviews.

To establish preliminary categories, we considered categories of impact within existing and well-established impact assessment frameworks for other emerging technologies, including the Official Public Sector Guidance on AI Ethics and Safety[214] and the HUDERIA tool mentioned above.[215]

Most themes (including ‘human autonomy’ and ‘privacy and data protection’) identified through the interviews and literature review fit within categories used in established frameworks. Those that did not (such as ‘harmful content’, and ‘market, economy and innovation’) were treated as distinct new categories.

The preliminary impact categories derived from this process were used to inform our design of the deliberative workshops. The workshops brought to light impacts that were not apparent through our prior research and helped refine our impact categories. The insights from ISW users and warehouse workers allowed for a more accurate depiction of how impacts may play out in practice.

The final list of categories is a result of this iterative multi-methods process.

Acknowledgements

This report was authored by Cami Rincon, Mahi Hardalupas, Jorge Perez and Hannah Claus, with substantive contributions from Michael Birtwistle.

We would like to thank Emmie Hine for taking the time to review earlier drafts of this report, and for her comments and contributions.

We are grateful for the work by our external partner Involve, who worked in close collaboration with the research team to design, execute and analyse the deliberative workshops conducted as part of our case study methodology.

This project was made possible by a grant from the Minderoo Foundation as part of its XR30 Fund programme.

Appendix: How we conducted our case study research

In this section, we present the process and outcomes of the two deliberative workshops conducted in partnership with Involve.

Case study one: Immersive social worlds

The first workshop focused on immersive social worlds (ISWs). ISWs are digital 3D environments where users can interact with their digital surroundings and other users as avatars. ISWs can be accessed through traditional hardware such as laptops, as well as through VR headsets, which in turn makes these environments more immersive. Examples include Meta’s Horizon Worlds and Second Life.

Activity 1: Scene setting

Prior to the workshop, participants were invited to a group introduction meeting held online. The purpose of this meeting was to set the scene: for participants to understand the remit of the workshop, build relationships and confidence, and agree a set of conversation guidelines and shared language for the workshop. Participants were provided with a presentation explaining what immersive technologies are and had an opportunity to ask questions about the content and the process.

To support deliberation about the impacts of ISWs, participants were given a simplified description of a hypothetical ISW application resembling real-world applications. This was based on the literature review and interviews to serve as a starting point for conversations that participants could reference during the workshop, while also drawing on their own experience and knowledge of virtual social worlds more broadly. An overview of the hypothetical application can be found below.

Application overview

The purpose of the application is to improve users’ experience of connecting with others and build community. It also seeks to promote freedom and self-expression. In this virtual environment, users can simulate real-life social interactions and activities with other users internationally, from their own home. This application allows users to customise their own avatars, build, sell and purchase virtual objects, and create private environments.

Application functionalities
  • Social connections and community building: An audio feature allows users to focus on conversations around them within the virtual environment and join in using microphones built into the headsets. Users can navigate through themed environments using a map or menu to fast-travel to specific locations, such as a virtual city, park or centre. They can meet other users in specific groups, sign up for virtual events and conferences, and organise social gatherings.
  • Creativity and self-expression: Users customise their avatars by selecting from pre-set options for body appearance and outfits. They can build, purchase and sell virtual objects. Users can use these objects to personalise private environments that they can build and invite other users to.
Technical specifications
  • Hardware: This application can be accessed through laptops, smartphones and VR headsets. The developer company is investing in and promoting VR headsets as the key medium for accessing the virtual environment. The minimum hardware required to access the virtual social world via VR is a VR headset and a pair of hand controllers used to navigate and interact with the virtual environment and other users. Headsets include a camera to track users’ physical environment, a microphone for voice and sound recognition, and motion sensors to process users’ movements. To further enhance the immersive experience, users can add more wearables, such as trackers on their wrists and ankles for more accurate motion tracking, or haptic suits to feel the environments and actions more vividly.
  • Data processing: When users interact with the hardware, data points are created which are processed and translated into an action in the virtual social world in real time. This triggers a reaction in the environment, which updates to adapt to this new information. The company’s policy states that most user data is not saved, for privacy and storage reasons, although in some cases data may be processed by algorithms to identify behavioural patterns and provide insights for improving the product, as well as for providing personalised advertisements.
Safety and inclusivity
  • To ensure the virtual social world is safe and inclusive for as many users as possible, the company has community guidelines and employs moderators to enforce these guidelines. They also ask for user feedback and collect user data to improve the product.

Finally, a significant amount of time in the introduction meeting was dedicated to recognising and proactively addressing the potential power disparities in the workshop: between experts by experience, subject matter experts, Involve and the Ada Lovelace Institute.

Activity 2: Individual impacts

The first activity of the workshop revolved around the impacts that ISWs may have on individuals. Participants were prompted to consider their own experience using virtual social worlds and were asked: ‘What are the main ways you have been personally impacted by this technology?’

A central theme was how these applications have aided participants’ social interactions, with participants noting that factors such as being able to be anonymous online reduces social anxiety, makes it easier to approach people compared with physical environments, and makes it possible to speak about things they would not talk about with friends in their physical environments.

Participants also noted that ISWs can serve as an excuse to socialise with people, and that features offered within these applications, such as games, are a social lubricant. Immersive social worlds were described as providing a place to escape with other people. Some participants, however, noted that they sometimes feel the same anxiety in virtual spaces as they do when meeting people in physical environments.

Similarly, participants discussed how ISWs support them in establishing and maintaining social connections and relationships with individuals and communities. They discussed both how they have made new connections via social worlds (including friends and partners), and how they use applications to spend time with people they know from their physical environments (such as doing virtual activities with family abroad).

Participants discussed how they have formed new friendship groups through applications, and how they have been able to connect with others with shared interests (such as professional careers) that they would not normally socialise with. Through these connections, participants described having a sense of inclusion and finding support structures.

ISWs were described as spaces that allowed participants to manage their presentation, supporting agency in self-expression, such as by enabling them to explore their queer expression or, on the other hand, allowing them to hide personal characteristics such as gender and ethnicity to experience less discrimination virtually than they do in their physical environments. Participants did comment, however, that the anonymity enabled by virtual social worlds can lead to discriminatory behaviour, noting that women have different experiences of virtual spaces.

Participants discussed exposure to harmful content within these applications, noting a high level of toxicity and trolling, the promotion of harmful behaviour such as alcoholism, and exposure to traumatic experiences such as other users’ self-harm. Participants also noted concerns about children being exposed to inappropriate behaviour or content.

Lastly, some participants discussed how ISWs had supported their career and skills development in areas such as programming.

Following this discussion, participants were given a handout containing the categories of potential impacts elicited by the project team as a reference point if needed throughout the workshop.

A subject matter expert then shared a presentation on the impacts of ISWs on the categories of human autonomy, interpersonal connection, and health and safety. The presenter discussed potential benefits in these areas, such as supporting users’ creativity through content creation and world-building, facilitating community building, and supporting users’ wellbeing, for example through at-home applications to support psychological and physical health.

They also discussed risks, including the potential for user manipulation, user reliance on technology companies for social connection, the weakening of in-person social connections, the possibility for users to become addicted to applications, and physiological impacts of constant usage, such as cyber-sickness.

Following the presentation, participants were asked to consider the impacts of ISWs on individuals with different protected characteristics by creating pen portraits. To create these pen portraits, participants were given hypothetical personas with diverse demographic and circumstantial characteristics who use virtual social worlds and were asked to imagine how they might be impacted, for better or for worse, by the technology. Figure 1 provides an example of a pen portrait filled out by participants.

Figure 1: Pen portrait example – ISW workshop

Activity 3: Wider impacts

Following a short break, participants heard presentations from two further subject matter experts. The first presenter focused on privacy, data protection and equality, with a focus on digital identities. This presenter discussed three different impact categories: privacy, equality and disability.

Discussing privacy, participants were given an overview of three different types (informational, decisional and local), how they could be affected by ISWs through the mass data collection conducted by access systems, the potential for this data to be used for surveillance, and how this could result in chilling effects, inference algorithms that predict and nudge the user in digital environments, and violations of local privacy through monitoring of surroundings and bystander data collection.

Discussing equality, participants were given an overview of protected characteristics in the UK and an explanation of how users can experience inequalities both in the way that ISW hardware is designed and within ISWs themselves, with reference to those protected characteristics.

Finally, an overview was given of how ISWs could either enhance or diminish accessibility. ISWs can provide users with new and alternative ways to interact with the world, allowing those facing accessibility constraints to overcome physical restrictions, but these experiences can be gatekept by devices that are not designed for different physical and cognitive functions.

The second presentation focused on wider systemic impacts, covering the categories of freedom of thought, expression and assembly; online safety; market, economy and innovation; education and training; labour; and the environment.

The second presenter discussed how ISWs may provide opportunities for unique creative expression and assembly through digital avatars and resources. At the same time, ISWs also have the potential to be used for censorship and surveillance through exploitative content moderation policies and by leveraging the data collected on ISW platforms. Following this, the presenter discussed how harmful content, cybercrime and misinformation can spread in ISWs, in the context of the difficulties of governing these spaces, poor moderation, and the heightened impact of inappropriate content given their immersiveness.

They then discussed the economic impacts of ISWs, such as their ability to provide services which could otherwise be out of reach, market size and the risk of anti-competitive practices by big technology companies.

The educational prospects of the technology were also covered, such as ISWs’ ability to provide in-depth and immersive educational experiences, as well as how asymmetrical access to these experiences could widen the digital divide.

After this, the presenter discussed the potential labour benefits and consequences of ISW use in the workplace, such as allowing more exhaustive and detailed safety training and the potential for increased workplace monitoring and privacy violations.

Finally, the environmental impacts of ISWs were covered, including the tendency for immersive technologies to become obsolete and the consequent effect on electronic waste, as well as the energy consumption of ISWs and the hardware required to access them.

Participants then undertook group reflection on the presentations where they identified impacts that feel important to them, mapping these onto an impacts chart (Figure 2) where they were placed in accordance with their timeline (future–now) and impact (negative–positive). At this point in the workshop, there was no limit to the number of impacts that could be placed on this matrix.

Figure 2: Identified impacts – ISWs

Some important benefits shared by participants included benefits related to identity and self-expression, such as facilitating the ability to decide what information about oneself users share with others and to hide protected characteristics as a means to reduce experiences of discrimination, and the ability for users to express their identity and creativity through avatars, voice features and virtual environments.

Participants connected this to benefits related to social connections and relationships, as applications enable them to feel included and belong with people who accept their self-expression, and to escape from physical environments into virtual spaces where they can feel safe. This sense of safety was partially attributed to safety features such as consent controls and moderators. These benefits echoed participants’ descriptions of the benefits of ISWs on their personal social connections, relationships and social interactions.

Benefits to skills and career development were also discussed, with examples including users learning languages through interacting with other users, neurodivergent people learning social skills, and increased educational and training opportunities for users across the globe due to offers in ISWs. These factors were noted as having impacts on career choices and opportunities.

Potential future benefits described by participants included an increase in the financial and physical accessibility of ISWs. This was seen as contingent on technology improvements that make applications accessible to different bodies and cognitive functions, and on factors such as making hardware repairable, which would increase the affordability of the technology and help bridge the digital divide.

Existing negative impacts of virtual social worlds identified by participants included a potential for dependency, with users mentioning that virtual worlds could be addictive and may contribute to users being disincentivised to connect with people, have experiences and be present for their responsibilities in the physical environment. Participants discussed how users’ social lives may come to depend on applications and their ability to communicate in person may deteriorate, with concern expressed about a future where users become more introverted and lazy.

Participants pointed out immediate concerns regarding online safety, and particularly exposure to harmful content and behaviour, including the promotion of substance use, discriminatory behaviour and content, trolling and harassment, and inappropriate content and behaviour for children.

Participants described a cultural precedent for users to share details of sensitive topics with other users who may not be equipped to manage this properly. On the other hand, participants expressed current concerns about freedom of expression, namely unfair moderation that may limit creative expression and where user content and behaviour can be misrepresented and lead to users being banned from applications.

Issues of financial and physical accessibility were identified, with participants mentioning that accessing ISWs is very expensive and requires space, which not everyone has. They discussed how these financial barriers overlap with physical barriers to access, providing the example of partially sighted users needing to pay additional funds for lens inserts.

Risks related to privacy and data protection were also identified. Participants discussed how they feel ‘more watched’ and have less privacy in virtual worlds than in physical worlds, and expressed concerns about users’ ability to consent to data sharing, with data sharing practices putting users at risk of discrimination and with biometric data being used for targeted advertising for harmful products. When discussing potential long-term impacts of ISWs, participants raised concerns about the potential for sophisticated surveillance given the detail of the data collected and the monopoly-like structure of the ISW market.

Finally, participants expressed serious concern about the environmental harms that may arise from ISWs, due to the tendency for hardware to become obsolete and difficult to repair, and highlighted future concerns around the amount of electricity and bandwidth these applications require, noting both environmental harm and the potential exacerbation of financial barriers to access and the global digital divide.

Activity 4: Prioritising impacts

In the following activity, participants chose six impacts to prioritise from the pool identified in the previous exercise. To help inform this decision, participants received a presentation from a subject matter expert about the rights and protections currently in place to address the impacts of ISWs, as well as potential gaps in these rights and protections.

The presenter discussed six categories of current rights relevant to harms in the physical world: human rights, equality rights, data protection and privacy rights, online harms protection rights, liability/criminal law, and disability rights.

The presenter then discussed new considerations raised by risks in virtual spaces, such as the collection of biometric data to make inferences about users’ emotional and psychological state, activity monitoring within online communities, how these risks can result in exemption from equality and disability laws on protected characteristics, and the potential of this technology to be used to track data about the user’s environment, such as the home in which they use the devices.

They touched on how these factors present challenges to existing law, giving the example of the EU Digital Services Act and how there may be difficulties implementing the law to mitigate harmful content in metaverses when such content is shared through user avatars rather than traditional posts.

Following this, the expert reviewed how existing law may not be sufficient for some aspects of ISWs, noting how the transnational nature of metaverses may complicate the enforcement of national laws that protect human rights, the lack of specificity of consent laws in ISWs, and the ambiguity over how assault in ISWs is legally classified, further restricting the application of liability and criminal law.

Following this presentation, participants heard information about factors to consider when determining what impacts were most important (Figures 3 and 4).

Figure 3: Considerations for prioritising harms

Figure 4: Considerations for prioritising benefits

Participants used these considerations to deliberate about which of the impacts they had come up with so far were the most important. They chose the six most important impacts for them:

  1. Impacts on freedom of expression.
  2. Making new relationships, and maintaining old ones.
  3. Bullying, and young people being exposed to inappropriate content.
  4. Accessibility and the digital divide.
  5. User data being misused and harming vulnerable groups.
  6. Potential overindulgence.

Activity 5: Drafting principles for change

This series of activities allowed participants to explore, discuss and understand the way ISWs work, the impact they have on their and others’ lives, and wider societal impacts. The knowledge and perspectives collated through the previous exercises allowed them to effectively participate in the final activity to produce the main outputs from the workshop – principles for change.

In the final two hours, participants were asked to work together to write a set of principles corresponding to the six most important impacts identified in the previous activity.

They were given a template which included the following sections for them to fill in:

  • the groups that will be the most affected are:
  • policymakers should take into account:
  • at all costs, they should avoid:
  • an ideal scenario would be:

These template sections have also been used below to structure participants’ feedback on principles for change for the identified impacts.

Outputs: Principles for change

Impact 1: Impacts on freedom of expression

‘Moderators are the only people who have actual power.’

Participants placed a strong emphasis on moderators’ unchecked power over other users in ISWs, and how there are issues with moderation.

The groups that will be the most affected are: Participants noted that everyone will be affected by this impact, though minoritised groups who they believed did not usually have a chance to speak for themselves will be even more impacted.

Policymakers should take into account: Participants noted that policymakers should be aware of the strong role that moderators play and of issues with moderators limiting others’ freedom of expression, with biases and with moderators removing other users ‘when they feel like it’.

Participants believed that there should be regulation for what moderators can do, noting that moderators should be humans and not AI, that they should be democratically elected, that there should be pathways for reporting moderators, and that moderators should be trained on existing regulation and how to keep spaces safe for everyone.

Participants also noted that policy should not be broad-stroke and should not exclude some voices. Instead, there should be space for a diversity of opinions and the same rights that apply in the physical world should also apply online. It was also noted that policymakers should take into account cross-country differences in standards of freedom of expression.

At all costs, they should avoid: Participants noted that policymakers should avoid enforcing policy which will prevent freedom of expression, insofar as the content is legal and does not promote hate speech and violence.

An ideal scenario would be: Participants noted that, ideally, ISWs would be an environment where everyone feels comfortable with self-expression of all kinds (excluding harassment), where different groups are able to access unique spaces in which they feel comfortable, and where no one is excluded from these spaces.

Impact 2: Making new relationships, and maintaining old ones

‘These worlds and communities are really important to users.’

Participants emphasised the importance of these virtual worlds and communities to users’ social relationships.

The groups that will be the most affected are: Participants noted that all users are impacted, but particularly those who struggle to maintain real-life relationships, such as those who have social anxiety, are neurodivergent or are introverted.

Policymakers should take into account: Participants noted that policymakers should be aware of the importance of these communities to their users as their main form of communicating and socialising, noting that a significant portion of the user base is vulnerable and relies on these spaces to feel safe.

In accordance with this, attendees noted that policies targeted at ISWs should be easy to implement, in order to minimise the risk of some platforms being shut down due to failure to comply. They noted that communities need to be protected from potential closures and supported if closures happen, through means such as public ownership, so that communities retain their ability to access these spaces.

At all costs, they should avoid: Attendees said that policymakers should avoid taking any action that may cause users to lose connections to others, including worldwide.

An ideal scenario would be: Ideally, these platforms should allow users to connect to anyone through VR.

Impact 3: Bullying, and young people being exposed to inappropriate content

‘The responsibility should be on the government, not just parents.’

Attendees noted the high risk and prevalence of bullying and young people being exposed to inappropriate content in ISWs.

The groups that will be the most affected are: Participants noted children, women, minoritised ethnic groups, religious groups, LGBTQIA+ people, neurodivergent people and anyone with UK protected characteristics as being at most risk from this impact.

Policymakers should take into account: In addressing this principle, participants noted that policymakers should create targeted approaches to the most common types of inappropriate content and provide support structures for those bullied and for vulnerable groups. They expressed that policymakers should consider preventing underage users from accessing 18+ content, noting that forms of digital identification could help with this.

When designing policies, participants emphasised, the knock-on consequences should be considered. They emphasised allocating responsibility to the government rather than placing it solely on young people’s parents, noting that parental monitoring can be restrictive and can infringe on children’s privacy. Instead, participants voiced that there should be more education for children and parents around real-life issues, allowing them to understand risks and find guidance on behaviour in virtual spaces.

At all costs, they should avoid: Participants noted that policymakers should avoid ‘putting responsibility in the wrong places’ by antagonising communities, restricting users or blaming them entirely for this issue. They also cautioned against an overcautious approach of pressuring platforms to change, particularly approaches which may restrict platforms in certain jurisdictions.

An ideal scenario would be: Ideally, participants noted that there would be greater education in schools around bullying and VR, as well as support structures for victims of bullying.

There would also be community involvement in policymaking to ensure that freedom of expression is accounted for and not overly restricted, and that any policies for new technology are future-proofed.

Impact 4: Accessibility and the digital divide

‘Those with less money can’t afford a more modern set-up, which creates a digital divide.’

Accessibility across income groups and the cascading effect that access asymmetry could have on the digital divide was a key principle of change considered by participants.

The groups that will be the most affected are: Participants noted that low-income households and countries and areas without broadband infrastructure would be most affected.

Policymakers should take into account: To remedy and prevent unintended consequences, participants noted, policymakers should leverage diverse public participation to identify affected groups, uncover barriers to access based on lived experience, test how communities react to policies prior to their implementation, and account for how the digital divide may affect different groups.

They noted practical policies which may address accessibility issues, such as broadband infrastructure investments for low-income households and the right to repair devices, which could make them more affordable in the second-hand market.

At all costs, they should avoid: Participants emphasised that policymakers should avoid any legislation which would increase the costs of these technologies.

An ideal scenario would be: Participants discussed how the government would ideally offer VR headsets to groups who may benefit from these spaces, such as disabled people and people dealing with anxiety or trauma.

Impact 5: User data being misused and harming vulnerable groups

‘Users should own their data.’

Personal data ownership, and the potential for user data to be misused, was a particular concern among participants.

The groups that will be the most affected are: Participants noted that users with sensitive data and minoritised groups are most affected by this impact. Platform creators were also pointed out as a high-risk group in this regard, due to their dependence on targeted adverts within their financial models.

Policymakers should take into account: Participants noted that to mitigate these risks, policymakers should consider broader data protection regulation and how to balance existing and new legislation to address newly identified gaps.

An example given was recent developments in biometric data sensitivities and how this should prompt reconsideration of policy relating to data transfers. Participants said that policymakers should be aware of the gap between the data being collected from users in ISWs and that which is necessary for them to function, noting that only data that is required for functioning should be collected.

They also emphasised a need for greater transparency and consent around data collection and use, noting that:

‘The trail of where data goes should be made explicit, through transparency and informed and meaningful consent in plain language.’

At all costs, they should avoid: Participants noted that policymakers should prevent users from being discriminated against based on their data and inferences made from this data.

An ideal scenario would be: Ideally, participants noted, private user data should not be misused, and the default model of data collection should be opt-in rather than opt-out.

Impact 6: Potential overindulgence

‘They are like cigarette companies.’

The final principle for change identified by attendees was countering the potential for overindulgence in ISWs.

The groups that will be the most affected are: This impact was pointed out as a particular problem for ‘social outcasts’ and people who may not have a strong social circle. More specifically, participants noted people with poor social skills, neurodivergent people and those from low-income households as particularly at risk.

Policymakers should take into account: In response to this, participants noted, policymakers and healthcare providers should recognise ISW addiction as a legitimate addiction so that appropriate support can be provided.

Participants said that policymakers should understand the mechanisms at play within the design of these technologies and that platform incentives are not aligned with users’ health.

Platforms should not be allowed to ‘profit from users’ suffering’ or have financial models that allow them to benefit from user overindulgence and dependence.

Participants noted that attention should be paid to the potential impact of lowering the costs of ISW devices to the point that certain activities may be more affordable within digital environments than in physical environments, resulting in disproportionate impacts on lower-income users.

At all costs, they should avoid: Participants noted that while policymakers should not enforce paternalistic policies which limit users’ free choice, they should prevent companies from implementing features which intentionally keep users on platforms. The use of features such as strict timers was suggested.

Case study two: Augmented reality (AR) for warehouse operations

The second workshop for this report revolved around the use of augmented reality (AR) glasses to support warehouse operations.

Activity 1: Scene setting

Prior to the workshop, participants were invited to a group introductory meeting to introduce the remit of the workshop, build confidence and relationships, and establish conversation guidelines. At this point, participants were also introduced to immersive technologies and how they could be used in warehouses and given the opportunity to ask any questions.

To support deliberation, participants were presented with a simplified description of a hypothetical AR application resembling real-world applications used in a warehouse context. This was developed based on the literature review and interviews to serve as a starting point for conversations that participants could reference during the workshop, while also drawing on their own experience and knowledge of AR more broadly. An overview of the hypothetical application can be found below.

Application overview

The product is smart glasses sold to an international shipping company for employees to use when picking, packing and posting orders. These glasses are meant to increase productivity and efficiency by displaying instructions that guide employees through each step of picking, packing and posting orders. These smart glasses are set to be piloted in a small number of warehouses to evaluate if they should be used more broadly.

Application functionalities
  • In-depth motion tracking and display: The smart glasses have built-in cameras that track the user’s environment and movements in order to process which step of the procedure the user is at. This data is used to provide live feedback through a visual AR overlay of what the user is doing.
  • Live instructions: Personalised instructions are presented for every procedure the user does. When picking items, these instructions will lead the user to an aisle number and shelf, aiming to increase productivity. Once the item is found, the AR glasses will check and tell the user whether the correct item was chosen. When the correct item is picked, users can interact with the glasses using their voice, telling the glasses to scan the item, mark the step as complete and continue with the next set of instructions.
Technical specifications
  • Hardware: Cameras and microphones integrated into the glasses collect video recordings of the user’s environment and track their movements, including their eye movements, body movements and speech. This information is fed into the visual instructions displayed and used to track the progression of the employee through their tasks and to update the instructions provided accordingly.
  • Data processing: The supplier of the AR glasses discloses that employee data is not saved, in order to comply with privacy and storage requirements. However, information such as error rates and completion times is recorded and analysed as part of the pilot. The company also asks employees to provide feedback on the technology.

Finally, a significant amount of time in the introduction meeting was dedicated to recognising and proactively addressing the potential power disparities in the workshop: between experts by experience, subject matter experts, Involve and the Ada Lovelace Institute.

Activity 2: Individual impacts

The first activity of the workshop revolved around the individual impacts of using AR in warehouses. Participants were prompted to reflect on their own experiences of the impacts of new technology in their workplace.

They were asked to reflect on their own experience of working in a warehouse, and discuss this in groups by answering the question ‘What happens in a warehouse?’ through the following prompts: ‘Who is there?’ ‘What do you do on a typical day?’ ‘What do you like about it?’ ‘What do you not like about it?’

Overall, participants expressed the difficulty of the work in warehouses, caused by a combination of the physical environment, the cognitive and physical requirements of tasks, targets and strict supervision.

Participants described warehouse environments as large, crowded, busy and noisy, noting that they can be up to the size of two football pitches, with around 300–400 people in them, along with fast-moving vehicles.

When describing their role and their daily routine, participants discussed being trained to work in different departments, each pertaining to tasks such as picking items, sorting items and driving vehicles. Participants described their days as composed of conducting detail-oriented, fast-paced, physically demanding and repetitive tasks.

They mentioned the need to pay significant attention to successfully complete tasks and that if mistakes are made they need to start the work over, how they become exhausted from doing the same thing repeatedly, and how their legs hurt from standing for a lot of the day.

They mentioned that they do not have time to sit and have only a 45-minute break each day. Some of the technology they mentioned using included scanners, voice picking technologies (headsets which deliver instructions and tasks), LLOPs (low-level order pickers) and forklifts used to transport items in storage.

Participants pointed out targets as a central feature that drives their day-to-day work, with targets assigned to each worker (for example, participants noted that Amazon warehouse targets are 200 scans an hour). Targets were emphasised as contributing to a high-pressure and precarious workplace, and it was noted that they are constantly monitored. If a worker doesn’t meet their targets, these are carried over to the next day and the worker is ‘called to the office’.

Participants discussed strategies for alleviating challenges in their jobs, such as by trying to swap their assigned tasks each day to avoid repetition, and chatting with colleagues in their departments, which they noted was not allowed.

Indeed, strict supervision was another prominent factor shaping participants’ experiences, with many noting being constantly watched by supervisors who ensure ‘there is no slacking’, as well as the use of digital monitors to track targets. Participants noted the variance in the ‘toughness’ of supervisors, their role in setting targets and their influence on workers’ job progression.

Participants were then asked to think about a time when a new technology had been introduced to their workplaces and what the impact of this had been, using the following statement as guidance: ‘Think of a time that a new system or type of technology was introduced at work, e.g. digital tracking, inventory software or scanning system.’ Alongside this they were given the following prompts: ‘What happened?’ ‘What changed as a result?’ ‘How did this impact you and other people?’

Most of the new technology that participants had experienced being introduced in their workplaces consisted of the move from scanners to voice picking technology, as well as the use of forklift trucks or LLOPs to transport items. Voice picking technology is comparable to the hypothetical AR application participants were presented with, given its use for providing instructions for tasks. Participants described features of this technology, including voice recognition.

While some participants noted that new technologies had changed little – for example, their adoption did not alter the repetitive nature of tasks – others expressed a positive view of voice picking technology, mentioning its ease of use, the practicality of hands-free hardware and what they described as the personal interaction facilitated by its conversational interface.

In some cases, participants pointed out aspects of their work which worsened because of the technology’s implementation, noting how this affected their wellbeing. Significantly, challenges in adopting voice picking technology were pointed out, such as the need for training, which was inconsistently offered, and workers having difficulties learning new technologies, which particularly affected older staff. Participants mentioned that some benefits of these technologies, such as ease of use, were contingent on workers being appropriately trained.

Participants discussed the impact that voice picking technology has had on their ways of working, noting challenges to their ability to socialise at work brought about by their tasks being performed while wearing headsets with voice recognition, which they would need to take off to feel comfortable talking to others.

They described impacts on the dynamics of supervision, such as the emergence of one-way interactions between the workers and their managers, whereby the manager has constant access to their supervisee but not the other way around, and an increase in worker monitoring and evaluations, with participants noting experiencing privacy violations and feeling recorded.

Finally, health and safety challenges brought about by the adoption of new technology were discussed. Participants noted the increased propensity for accidents as a result of voice pickers interacting with other technologies, such as LLOPs operated by workers wearing voice picking headsets, and there being a lack of training or appropriate instructions for operators.

To help participants gain a clearer understanding of what AR technologies in warehouse contexts might look like, they were presented with a promotional video from a real-world manufacturer of AR glasses currently sold for warehouses, which touched on potential benefits such as simplification of operations and tasks, greater accuracy, and employee performance improvements such as faster task completion.

Participants also heard from a subject matter expert who described their research on the growing use of wearable technologies and their impacts on the quality of life of workers using them, with a focus on human autonomy, interpersonal connection, and health and safety.

This presentation highlighted changes to job quality associated with wearables, including positive changes such as better salaries, career prospects, fairer evaluations, more consistent schedules and more meaningful work. Negative changes included greater chances of job loss, employees receiving less learning, more repetition, greater workload, a higher pace of work, greater surveillance and exposure to abuse, greater health and safety risks, less flexibility, and less sociable hours.

The presentation covered the use of wearable technologies for Algorithmic Affect Management (AAM), which involves using technology to track and monitor human emotions, physical movements and other physiological aspects via biometrics. Overall, the presentation noted a decrease in quality of life for workers who are often or always exposed to wearable technologies.

Following this, participants were given a handout containing the categories of potential impacts elicited by the project team as a reference if needed throughout the workshop. They were then asked to consider the impacts of AR glasses within warehouses on workers with different protected characteristics by creating pen portraits (Figure 5).

To create these pen portraits, participants were given information about three hypothetical workers with diverse demographic and circumstantial characteristics and asked to imagine how they might be impacted, for better or for worse, by the technology.

Participants detailed how the AR glasses would impact their health, wellbeing, privacy, ease of work and connection with other workers, as well as how easy it would be for them to learn to use the technology.

Figure 5: Pen portrait example – AR for warehouse operations

Activity 3: Wider impacts

Following a short break, participants heard presentations from two further subject matter experts on the impact of immersive technologies on privacy, data protection, equity and disability, and on wider societal, economic and environmental impacts.

The first presentation gave an overview of the potential impacts of AR headsets on privacy, data protection, equity and disability. On the topic of privacy and data protection, the presentation covered the data recorded by AR headsets, such as eye tracking and environment recording, the potential for malicious applications to be used with the headsets, adversarial machine learning, and how headsets could be used to infringe on others’ privacy, such as by recording conversations and collecting bystanders’ biometric data.

Equality and disability risks included factors related to headset design, such as restricted usability among users with different head shapes, headwear and hairstyles, and default lenses being unusable for those with prescription glasses. Factors related to experience design included physical requirements to engage in certain AR experiences, the potential for AR experiences to alter memory and the potential for general adverse mental health responses similar to those which can occur using VR headsets.

Benefits related to equality and disability were also discussed. These included the potential use of these technologies as translation tools for signage and foreign languages; live subtitle delivery; healthcare accessibility; the promotion of social behaviour through AR applications such as Pokémon Go; the maintenance of and education about cultural heritage, for example using AR headsets to provide more immersive experiences within museums; and streamlined collaboration through AR’s ability to translate foreign languages and facilitate communication between people from different parts of the world in virtual meetings.

The second presentation gave an overview of the potential wider impacts of the implementation of AR. This covered six potential impacts (freedom of thought, harmful behaviour, economic impacts, skills training, labour impacts and environmental impacts) and the benefits and risks each might have.

Freedom of thought, defined as the ability for people to come together, express themselves and organise around shared ideas, included benefits such as the potential of the technology to facilitate connection and communication between workers, while posing the risk of surveillance through employers frequently tapping into the headset’s feed and restricting workers’ ability to take breaks and speak to other workers.

Harmful behaviour, referring to the different ways someone could cause harm to other workers, was said to be a greater risk with the implementation of AR technology. This is because managers can display offensive or hurtful information on employee headsets; surveillance capabilities can be exploited, resulting in workers being monitored and watched in ways that make them uncomfortable; and the devices are vulnerable to external hacking, potentially causing them to malfunction or show false information.

Economic impacts of the technology included the ability of businesses to increase profits through labour efficiency gains, the potential for high sunk costs and the risk for companies making these investments as a result of mixed evidence of productivity gains. The impact on skills training, encompassing the changes that AR glasses could bring about for education and skills training in warehouses, included the possibility of helping workers to learn new skills, as well as their potential to facilitate learning outside of the workplace.

However, the possibility of workers with limited AR experience being excluded from such gains and the resulting effect on employment prospects were also brought up. Some of the labour impacts discussed, encompassing how the technology could affect the workplace more generally, included the potential to augment employee efficiency, making it easier for them to reach targets, navigate the workplace environment more safely and reduce physical stress.

Negative impacts included the potential for the implementation of AR glasses to intensify workplace environments and increase pressure on targets, possible redundancies in light of increased efficiencies, and the potential for mass implementation of AR in the workplace to increase labour market entry barriers, particularly for those without technology experience.

Finally, the environmental impacts discussed included the potential for waste reduction in warehouses as a result of increased efficiency, alongside consequences such as the greater energy consumption needed to use these products and the tendency for the hardware to become obsolete, resulting in greater electronic waste if not appropriately recycled.

These two presentations were followed by a group discussion where participants identified potential impacts, assessed their relative importance and mapped them onto an impacts chart, where they were placed in accordance with their timeline (future–now) and impact (negative–positive). At this point in the workshop, there was no limit to the number of impacts that could be placed on this matrix (see Figure 6).

Figure 6: Identified impacts – AR for warehouse operations

Benefits identified included the potential for AR glasses to ease task performance, with participants noting that there could be increased accuracy and time saved. Participants also noted potential benefits related to language accessibility, and how this technology could serve to translate instructions for workers for whom English is not their first language.

Increased monitoring was also identified as a potential benefit: it could reduce workplace harassment through the collection of evidence to support accusations; and it could improve health and safety, as recording employees could lead to them being more careful, while recordings could provide insights into the causes of accidents and evidence that protects workers when accidents happen.

Similarly, increased monitoring was noted as helping workers to concentrate and control themselves at work, reducing socialising due to there being ‘no excuse to talk if headsets tell you what to do’.

Potential negative impacts identified included invasions of privacy, as participants discussed being uncomfortable with always-on video recording devices and not having control over their own data. Participants noted that this could have particularly adverse impacts on individuals who want their identities protected (e.g. migrants) and workers with mental health issues, with concerns also noted about workers and labour welfare representatives not having enough knowledge about the technology to assess and challenge it.

Participants emphasised how the technology could reveal workers’ characteristics, such as accent, religion, sexuality and leisure interests, to managers, contributing to their ability to target workers and exacerbating instances of harassment in the workplace.

Relatedly, participants noted the potential for increased worker monitoring, expressing concerns about feeling micromanaged and about the technology increasing the visibility of employees’ pace of work. Increased monitoring was also associated with the potential for reduced interaction and connection with colleagues, as participants mentioned that the implementation of the technology could result in less socialising.

The potential increase of work pressure was also a common theme, as participants mentioned that productivity benefits of the use of this technology may result in targets being increased and people rushing to meet them, making the work environment more stressful.

In a similar vein, health and safety concerns were also brought up. Participants discussed how tasks such as LLOP (low-level order picker) driving may become more precarious when using this technology due to workers being distracted by visual displays on the glasses, how the physical strain of using devices may result in mistakes and how challenges in learning to operate the new technology may lead to safety hazards.

Potential health impacts relating to hearing, eyesight, body strain and motion sickness as a result of using the technology were prevalent in participant discussions. This was particularly the case in discussions around accessibility, as participants noted that some workers would be unable to use the technology or would face significant challenges: those at risk of epileptic seizures or with greater photosensitivity; workers with eyesight problems who would require custom prescription lenses for AR glasses; workers whose hairstyles or turbans the technology does not accommodate; workers whose language is not represented in the technology; and older workers who may have less experience of using technology.

Job insecurity and salary reductions were also a key concern identified by participants, who contemplated how damaged equipment might result in salary reductions, as well as the risk of automation from new AR technologies.

Lastly, a lack of confidence in the accuracy and reliability of this technology was mentioned, with some participants stating that it could lack user-friendliness and flexibility and that this could have unforeseen impacts.

Activity 4: Prioritising impacts

In the following activity, participants chose six impacts to prioritise from the pool identified in the previous exercise. To help inform this decision, they received a presentation from a subject matter expert about the rights and protections currently in place to address the impacts of AR in workplace contexts, as well as potential gaps in these rights and protections.

Policies explained in this presentation included the UK Health and Safety at Work etc. Act 1974, which seeks to ensure that employers provide safe working environments, and the UK Health and Safety (Display Screen Equipment) Regulations, which seek to ensure that employers protect workers from the health risks of display screen equipment.

Gaps in these policies included not fully addressing unique risks posed by immersive technologies, such as cognitive load and ergonomic stress, and not establishing standards for motion and sensor trackers.

Employment rights, such as the UK Employment Rights Act 1996, which seeks to safeguard workers’ rights with measures such as protection against unfair dismissal, were also discussed. The presentation noted that employment rights do not comprehensively protect against job losses due to automation and do not address the impacts that task fragmentation by automation may have on job satisfaction and security, and it highlighted the lack of UK laws governing the use of AI tools at work.

Lastly, privacy and data protection policies, including the UK General Data Protection Regulation (UK GDPR), which establishes the lawful basis for monitoring, and technology-specific guidelines such as guidance from the Information Commissioner’s Office on surveillance and data processing in the workplace, were discussed.

The insufficient provision of regulation addressing the extent and manner of surveillance of these technologies was explained, as well as the lack of clarity on policies governing the balance between efficiency and privacy, and of technology-specific guidance addressing the ethical implications of human–machine interaction and the psychological impact of increased surveillance and task control.

Following this, participants heard information about factors to consider when determining which impacts were most important, using the same information shared in the workshop on ISWs. Participants used these considerations to deliberate about which of the impacts they had come up with so far were the most important. They chose the six impacts most important to them:

  1. Invasion of privacy.
  2. Enabling targeting and harassment.
  3. Increased risks to health and safety.
  4. Impacts on communication at work.
  5. Impacts on targets.
  6. Reduced social interaction and connection.

Activity 5: Drafting principles for change

This activity allowed participants to explore, discuss and understand how AR works, the impact it could have on their and others’ lives, and wider societal impacts. The knowledge and perspectives collated through previous exercises allowed them to effectively participate in the final activity to produce the main outputs from the workshop – principles for change. They were given a template which included the following sections for them to fill in:

  • the groups that will be the most affected are:
  • policymakers should take into account:
  • at all costs, they should avoid:
  • an ideal scenario would be:

These template sections have also been used below to structure participants’ feedback on principles for change for the identified impacts.

Outputs: Principles for change

Impact 1: Invasion of privacy

‘Workers should have a choice about what data is used.’

Participants noted the importance of workers having control over what data is used.

The groups that will be the most affected are: This impact was expressed as a particular concern for introverts and naturally private people, those who cannot share their identity due to circumstances such as witness protection, people of ethnicities and religions which may face micro-harassment, and women who may be discussing issues they prefer to keep private.

Policymakers should take into account: To counter impacts on affected groups, participants discussed how policymakers should ensure that user data is used only when it is essential and that workers have an understanding of what their data is used for.

Participants highlighted the significance of hierarchical structure in their workplace context when designing policies, to avoid situations where direct line managers, rather than someone outside of the line management chain, have access to employee data.

Alongside this, participants noted that policymakers should consider approaches to enable pausing data collection or the removal of some activities (such as breaks) from being recorded. The need for unions to adapt their remit to new data issues was acknowledged.

It was also acknowledged that the data these systems collect could be used positively, such as for performance rewards or to evidence instances of harassment of employees.

At all costs, they should avoid: Participants noted that policymakers should avoid facilitating situations where workers’ data can be used against them.

An ideal scenario would be: Participants noted that, ideally, a third party would handle employee data and could externally decide what is shared with the employer.

Impact 2: Enabling targeting and harassment

‘There is harassment about football, religion, sexuality, the way people talk, anything.’

Attendees also discussed how this technology could enable harassment towards workers.

The groups that will be most affected are: Participants mentioned that workers who are immigrants, those who work for agencies, women, new staff, older people and disabled people could be particularly affected.

Policymakers should take into account: Participants discussed that policymakers must acknowledge the prevalence of harassment in warehouses, and its common root in micromanagement and pressures on workers to work faster. They noted that increased monitoring via AR glasses is likely to worsen harassment.

To prevent this, participants noted that there must be a limit on what data can be accessed by managers. Finally, the role of job security must be considered, particularly in the case of agency staff, given their heightened vulnerability to targeting and harassment.

At all costs, they should avoid: Participants noted that policymakers should avoid enabling actions such as 24-hour monitoring that might give more power to managers.

An ideal scenario would be: Ideally, participants noted, there must be better institutionalised reporting and penalties for harassment, with workers having the power to request evidence such as recordings of instances of harassment.

Impact 3: Increased risks to health and safety

‘Accidents happen mostly because of a lack of training and no map or directions.’

The increased risks to the health and safety of workers that accompany the use of AR in a warehouse context were also a prominent theme identified in the workshop discussion.

The groups that will be most affected are: This impact was noted as affecting all parties involved in a warehouse context, including workers, contractors and companies themselves.

Policymakers should take into account: Participants noted that at the very least, the technology deployed in a warehouse should meet current health and safety standards. These should be updated to be appropriate for each new iteration of the technology and prioritise its inclusivity and ease of use.

Participants highlighted that sufficient safety training should be a prerequisite for operating the technology, and that this requires a multifaceted approach, including workers being paid for training during working hours and training being rolled out prior to the deployment of the technology. Moreover, participants noted that ways of working must be adjusted to support the safe use of the technology, such as by increasing breaks for workers using the technology to avoid tiredness.

At all costs, they should avoid: Participants noted that policymakers should avoid unfairly delegating responsibility to workers for health and safety at work.

An ideal scenario would be: Participants noted that, ideally, all workers would be trained to use the technology safely and it would be ensured that everyone feels responsible for health and safety at work.

Impact 4: Impacts on communication at work

‘There isn’t an option for different languages. If there was, it could be helpful as [my colleague] is still struggling with English.’

The impacts of the AR product on communication were also identified as a priority for change, particularly with respect to the technology’s frequent lack of multilingual capabilities and alternative forms of communication.

The groups that will be the most affected are: This impact was identified as a problem especially for workers who do not speak English fluently or who are from countries where English is not spoken as a first language, and for people with impaired hearing or speech.

Policymakers should take into account: Participants discussed that policymakers should consider that AR could be used to facilitate communication between workers by supporting translation between fellow workers, or between workers and their managers.

They noted that the accessibility and user-friendliness of the technology should be a priority, with enhanced ability to recognise different accents, provide subtitles and sync with workers’ hearing aids.

Finally, they stressed that the technology should support languages appropriate to the linguistic diversity of the workers who use it and should be fair to all workers.

At all costs, they should avoid: Policymakers should ensure that harassment or bullying of workers with limited English language ability is not enabled or facilitated.

An ideal scenario would be: The implementation of this technology should augment communication at work and allow everyone to understand each other better.

Impact 5: Impacts on targets

‘Targets will grow – there will be more to do and more monitoring.’

Attendees pointed out the impact that AR could have on targets within the warehouse environment, particularly from the perspective of target tracking and increased pressure.

The groups that will be most affected are: Participants identified older workers, people with different learning skills, and disabled workers as particularly vulnerable to this impact.

Policymakers should take into account: Participants noted that policymakers should ensure that appropriate support for achieving targets is provided to the diverse range of workers represented in warehouse contexts.

They noted practical forms of support such as inclusive training, allocating workers to tasks for which their skills are best suited, and ensuring appropriate health and safety systems and equipment. Participants also noted that policymakers should consider the impact that targets can have on breaks and should safeguard the appropriate time for breaks.

At all costs, they should avoid: Participants noted that policymakers should pay special attention to preventing situations where jobs could be lost as a result of this technology, due either to redundancy or to challenges in adapting to using it.

An ideal scenario would be: Participants noted that, ideally, the successful implementation of this technology would allow them to work more smoothly, with the psychological benefits of reduced management and a greater sense of control.

Impact 6: Reduced social interaction and connection

‘I like chatting with other colleagues, but it’s not allowed.’

Participants’ final priority in this workshop was the impact of this technology on their ability to interact and connect with colleagues.

The groups that will be most affected are: Participants noted that while this impact applies to all workers using the technology, who may feel overloaded with work if they cannot connect to others, it is particularly significant for those who speak a different language to the one that the system functions in, due to the potential for them to be isolated from those with similar backgrounds.

Policymakers should take into account: Participants highlighted that policymakers should ensure workers have sufficient freedom at work to interact socially. They said that policymakers should encourage changes within existing workflows, such as a shift towards group work and targets rather than individual workflows. They highlighted how collaborating with others helps with sharing skills, increases productivity, allows workers to vary their tasks and creates opportunities for progression. Lastly, the importance of sufficient ‘quality’ breaks was emphasised.

At all costs, they should avoid: Participants noted that policymakers should prevent ‘the little freedom workers have’ from being rolled back as a result of the adoption of AR.

An ideal scenario would be: Participants noted that an ideal scenario would take the form of establishing frequent and high-quality breaks.

Footnotes

[1] Cami Rincon and Jorge Perez, ‘What Are Immersive Technologies?’ (Ada Lovelace Institute, 5 March 2025) <www.adalovelaceinstitute.org/resource/immersive-technologies-explainer/> accessed 17 June 2025.

[2] Ibid.

[3] Cami Rincon and Jorge Perez, ‘Reality Check’ (Ada Lovelace Institute, 2025) <https://www.adalovelaceinstitute.org/report/reality-check/> accessed 23 September 2025.

[4] Ada Lovelace Institute, Mission Critical (October 2023) <https://www.adalovelaceinstitute.org/policy-briefing/ai-safety/> accessed 10 May 2025.

[5] Ibid.

[6] ‘Google Demos Android XR Glasses at I/O with Live Language Translation’ (Engadget, 20 May 2025) <www.engadget.com/wearables/google-demos-android-xr-glasses-at-io-live-translation-191510280.html> accessed 10 June 2025.

[7] James Tyrrell, ‘AR in Supply Chain: e-Commerce Applications Explained’ (TechHQ, 10 March 2023) <https://techhq.com/news/augmented-reality-in-supply-chain-e-commerce-applications-explained/> accessed 17 July 2025.

[8] Prajakt Pande and others, ‘Long-Term Effectiveness of Immersive VR Simulations in Undergraduate Science Learning: Lessons from a Media-Comparison Study’ (2021) 29 Research in Learning Technology <https://eric.ed.gov/?id=EJ1293535> accessed 9 June 2025.

[9] P5, P9, P15.

[10] P1; Henry Matovu and others, ‘Immersive Virtual Reality for Science Learning: Design, Implementation, and Evaluation’ (2023) 59 Studies in Science Education 205.

[11] Guido Makransky, Thomas S Terkildsen and Richard E Mayer, ‘Adding Immersive Virtual Reality to a Science Lab Simulation Causes More Presence but Less Learning’ (2019) 60 Learning and Instruction 225.

[12] Arham I Iqbal and others, ‘Immersive Technologies in Healthcare: An In-Depth Exploration of Virtual Reality and Augmented Reality in Enhancing Patient Care, Medical Education, and Training Paradigms’ (2024) 15 Journal of Primary Care & Community Health 21501319241293311.

[13] Sofia Garcia Fracaro and others, ‘Immersive Technologies for the Training of Operators in the Process Industry: A Systematic Literature Review’ (2022) 160 Computers & Chemical Engineering 107691.

[14] Zhenan Feng and others, ‘Immersive Virtual Reality Training for Excavation Safety and Hazard Identification’ (2023) 13 Smart and Sustainable Built Environment 883.

[15] P5.

[16] Fatemeh Javaheri and Mohammad Safaei, ‘Evaluating the Impact of VR Interfaces on User Productivity in Manufacturing Environments’ (2024) 2 International Journal of Advanced Human Computer Interaction <www.ijahci.com/index.php/ijahci/article/view/11> accessed 9 June 2025.

[17] Valentina Di Pasquale and others, ‘Virtual Reality for Training in Assembly and Disassembly Tasks: A Systematic Literature Review’ (2024) 12 Machines 528.

[18] P3.

[19] Joanna Glasner, ‘Metaverse and Augmented Reality Remain Unpopular With VCs’ (Crunchbase News, 21 June 2024) <https://news.crunchbase.com/venture/metaverse-ar-vr-ai-aapl-meta/> accessed 15 May 2025.

[20] P2, P12, P15.

[21] P20, P25.

[22] P24.

[23] P2, P10, P12, P22.

[24] P2.

[25] P22.

[26] P22; Lena Cohen, ‘Mad at Meta? Don’t Let Them Collect and Monetize Your Personal Data’ (Electronic Frontier Foundation, 17 January 2025) <www.eff.org/deeplinks/2025/01/mad-meta-dont-let-them-collect-and-monetize-your-personal-data> accessed 23 May 2025.

[27] P12.

[28] P12.

[29] ‘Future of Augmented Reality: A Comprehensive Study on Apple Vision Pro’ (EBSCO) <https://openurl.ebsco.com/EPDB%3Agcd%3A3%3A5927139/detailv2?sid=ebsco%3Aplink%3Ascholar&id=ebsco%3Agcd%3A179699113&crl=c&link_origin=scholar.google.com> accessed 17 January 2025.

[30] P24.

[31] P9.

[32] See Appendix – Case study two: Augmented reality in warehouse environments.

[33] P11, P14, P20, P23.

[34] P11.

[35] P6, P8.

[36] P6, P8, P9, P11, P12, P15, P17, P19, P21, P23, P25.

[37] ‘Welcome to Decentraland’ (Decentraland) <https://decentraland.org/> accessed 25 November 2024.

[38] ‘How Immersive Technology, Blockchain and AI Are Converging’ (World Economic Forum, 21 June 2024) <https://www.weforum.org/stories/2024/06/the-technology-trio-of-immersive-technology-blockchain-and-ai-are-converging-and-reshaping-our-world/> accessed 17 June 2025.

[39] P23.

[40] Emmie Hine and others, ‘Safety and Privacy in Immersive Extended Reality: An Analysis and Policy Recommendations’ (2024) 3 Digital Society 33.

[41] P12.

[42] P23.

[43] P12; Chris Welch, ‘Meta Tightens Privacy Policy around Ray-Ban Glasses to Boost AI Training’ (The Verge, 30 April 2025) <www.theverge.com/news/658602/meta-ray-ban-privacy-policy-ai-training-voice-recordings> accessed 22 May 2025.

[44] As covered in our immersive technologies explainer, these fall within our definition of immersive technologies given their ability to overlay audio information. Cami Rincon and Jorge Perez, ‘What Are Immersive Technologies?’ (Ada Lovelace Institute, 5 March 2025) <www.adalovelaceinstitute.org/resource/immersive-technologies-explainer/> accessed 17 June 2025.

[45] Bianca Kronemann and others, ‘How AI Encourages Consumers to Share Their Secrets? The Role of Anthropomorphism, Personalisation, and Privacy Concerns and Avenues for Future Research’ (2023) 27 Spanish Journal of Marketing – ESIC 3; Niloofar Mireshghallah and others, ‘Trust No Bot: Discovering Personal Disclosures in Human–LLM Conversations in the Wild’ (arXiv, 20 July 2024) <http://arxiv.org/abs/2407.11438> accessed 22 May 2025.

[46] Russell Brandom, ‘Even If You’re Not Signed Up, Facebook Has a Shadow Profile for You’ (The Verge, 11 April 2018) <www.theverge.com/2018/4/11/17225482/facebook-shadow-profiles-zuckerberg-congress-data-privacy> accessed 9 June 2025.

[47] Suchismita Pahi and Calli Schroeder, ‘Extended Privacy for Extended Reality: XR Technology Has 99 Problems and Privacy Is Several of Them’ (Social Science Research Network, 28 August 2022) <https://papers.ssrn.com/abstract=4202913> accessed 21 November 2024.

[48] P2, P4, P7, P9, P12, P15, P17, P22.

[49] P3, P20, P19.

[50] See Appendix – Case study two: Augmented reality in warehouse contexts.

[51] P9, P19, P21.

[52] P19.

[53] ‘New OECD Report on Algorithmic Management Reveals Urgent Need for Worker Protections’ (TUAC) <https://tuac.org/news/new-oecd-report-on-algorithmic-management-reveals-urgent-need-for-worker-protections/> accessed 29 April 2025.

[54] Kaishi Naito and others, ‘Passive Fatigue Assessment in Augmented Reality Workspaces: Behavioral Cues Indicators for Workers with Intellectual Disabilities’, 2024 International Conference on Cyberworlds (CW) (2024) <https://ieeexplore.ieee.org/abstract/document/10917718> accessed 10 April 2025.

[55] Hui Wen Loh and others, ‘Automated Detection of ADHD: Current Trends and Future Perspective’ (2022) 146 Computers in Biology and Medicine 105525.

[56] ‘Epic Roller Coasters on Meta Quest’ (Oculus) <www.meta.com/en-gb/experiences/epic-roller-coasters/2299465166734471/> accessed 18 July 2025.

[57] ‘Mission: Mars on Oculus Rift’ (Oculus) <www.meta.com/en-gb/experiences/pcvr/mission-mars/3165066883560962/> accessed 18 July 2025.

[58] ‘Consumer Autonomy and Social Technology: The Case of Social Media Algorithms and the Metaverse’ (ResearchGate) <www.researchgate.net/publication/377133341_Consumer_Autonomy_and_Social_Technology_The_Case_of_Social_Media_Algorithms_and_the_Metaverse> accessed 9 June 2025.

[59] P1, P3, P6, P7, P12, P19, P25.

[60] P1.

[61] P6.

[62] P6.

[63] Pier Paolo Tricomi and others, ‘You Can’t Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality’ (2023) 11 IEEE Access 9859.

[64] P19.

[65] ‘Coming AI-Driven Economy Will Sell Your Decisions Before You Take Them, Researchers Warn’ (University of Cambridge, 30 December 2024) <www.cam.ac.uk/research/news/coming-ai-driven-economy-will-sell-your-decisions-before-you-take-them-researchers-warn> accessed 29 April 2025.

[66] P7, P11.

[67] P11.

[68] P15.

[69] ‘Learn about Advertising IDs on Meta Quest’ (Meta) <www.meta.com/en-gb/help/quest/616468144164678/> accessed 9 June 2025.

[70] P4.

[71] Jaime Guixeres and others, ‘Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising’ (2017) 8 Frontiers in Psychology 1808.

[72] Sukkyung You, Euikyung Kim and Donguk Lee, ‘Virtually Real: Exploring Avatar Identification in Game Addiction among Massively Multiplayer Online Role-Playing Games (MMORPG) Players’ (2017) 12 Games and Culture 56.

[73] HS Kim, RD Leslie and SH Stewart, ‘A Scoping Review of the Association between Loot Boxes, Esports, Skin Betting, and Token Wagering with Gambling and Video Gaming Behaviors’ (2023) 12(2) Journal of Behavioral Addictions 309 <https://akjournals.com/view/journals/2006/12/2/article-p309.xml> accessed 9 July 2025.

[74] Ljubisa Bojic, ‘Metaverse through the Prism of Power and Addiction: What Will Happen When the Virtual World Becomes More Attractive Than Reality?’ (2022) 10 European Journal of Futures Research 22.

[75] See Appendix – Case study one: Immersive social worlds.

[76] Ned Cooper and Alexandra Zafiroglu, ‘Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models’ (arXiv, 27 August 2024) <http://arxiv.org/abs/2408.15066> accessed 23 May 2025.

[77] Yue Lyu and others, ‘EMooly: Supporting Autistic Children in Collaborative Social-Emotional Learning with Caregiver Participation through Interactive AI-Infused and AR Activities’ (2024) 8 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1.

[78] Carl H Smith and Judith Molka-Danielsen, ‘The World as an Interface: Exploring the Ethical Challenges of the Emerging Metaverse’ (2023) Hawaii International Conference on System Sciences 2023 (HICSS-56) 3.

[79] P3, P8, P15, P21, P24.

[80] P41.

[81] P3.

[82] Andrea Willige, ‘Deepfake legislation: Denmark takes action’ (World Economic Forum, 30 July 2025) <https://www.weforum.org/stories/2025/07/deepfake-legislation-denmark-digital-id> accessed 17 August 2025.

[83] C Gerdenitsch, A Sackl and P Hold, ‘Augmented Reality Assisted Assembly: An Action Regulation Theory Perspective on Performance and User Experience’ (2022) 92 International Journal of Industrial Ergonomics 103384.

[84] P1, P3, P8, P10, P12, P13, P18, P19, P20, P25.

[85] P10.

[86] P1.

[87] P8.

[88] P1, P18.

[89] P18.

[90] James Hutson, ‘Social Virtual Reality: Neurodivergence and Inclusivity in the Metaverse’ (2022) 12 Societies 102.

[91] ‘Thrive Pavilion’ <https://thrivepavilion.org/> accessed 9 June 2025; ‘Metaverse Combats Loneliness in Disabled Community’ (Able2UK, 1 March 2023) <www.able2uk.com/stop-the-shadows/stop-the-shadows-news/metaverse-combats-loneliness-in-disabled-community> accessed 9 June 2025.

[92] P25.

[93] P19.

[94] P25.

[95] Nexhmedin Morina and others, ‘Meta-Analysis of Virtual Reality Exposure Therapy for Social Anxiety Disorder’ 53 Psychological Medicine 2176.

[96] ‘Virtual Reality as a Medium to Elicit Empathy: A Meta-Analysis’ (2020) 23(10) Cyberpsychology, Behavior and Social Networking <www.liebertpub.com/doi/abs/10.1089/cyber.2019.0681> accessed 24 June 2025.

[97] Nicola S Schutte and Emma J Stilinović, ‘Facilitating Empathy through Virtual Reality’ (2017) 41 Motivation and Emotion 708.

[98] Elizabeth Dyer, Barbara J Swartzlander and Marilyn R Gugliucci, ‘Using Virtual Reality in Medical Education to Teach Empathy’ (2018) 106 Journal of the Medical Library Association: JMLA 498.

[99] P19.

[100] Jon Rueda and Francisco Lara, ‘Virtual Reality and Empathy Enhancement: Ethical Aspects’ (2020) 7 Frontiers in Robotics and AI <www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2020.506984/full> accessed 24 June 2025.

[101] P19.

[102] P Kourtesis ‘The Extended Mind and Body in Extended Realities: A Scoping Review of XR Applications and Risks in the Metaverse’ (ResearchGate, September 2024) <www.researchgate.net/publication/384088506_The_Extended_Mind_Body_in_Extended_Realities_A_Scoping_Review_of_XR_Applications_and_Risks_in_the_Metaverse> accessed 28 February 2025.

[103] Alessandra Pellicano, ‘Efficacy of Virtual Reality Exposure Therapy in Treating Post-Traumatic Stress Disorder’ (2023) 50 Archives of Clinical Psychiatry <https://archivespsy.com/menu-script/index.php/ACF/article/view/2188> accessed 9 June 2025.

[104] Kate Granger, ‘Importance of Communication to Patient Experience’ (Royal College of Surgeons, 29 May 2015) <https://www.rcseng.ac.uk/news-and-events/blog/importance-of-communication/> accessed 16 May 2025.

[105] Erika Simpson, ‘Cutting-Edge Dynamics of Drone Technologies: Military Strategies and Peaceful Innovations’ (2024) 18 In Factis Pax: Journal of Peace Education and Social Justice <https://openjournals.utoledo.edu/index.php/infactispax/article/view/1278> accessed 17 June 2025.

[106] Madalyn Javier and others, ‘The Alt-Self: Investigating the Inclusivity of Self-Avatar Representations in Social VR’, Companion Proceedings of the 2024 Annual Symposium on Computer-Human Interaction in Play (Association for Computing Machinery 2024) <https://dl.acm.org/doi/10.1145/3665463.3678783> accessed 11 April 2025.

[107] P17.

[108] Emmie Hine, Josh Cowls and Luciano Floridi, ‘Assembly and Expression in Extended Reality: Transposing Human Rights across Realities’ (2025) 8 Interactive Entertainment Law Review 66.

[109] Ibid.

[110] Farzam Kharvari and Lorenz Ewald Kaiser, ‘Impact of Extended Reality on Architectural Education and the Design Process’ (2022) 141 Automation in Construction 104393.

[111] P19.

[112] Jeremy Hsu, ‘US Government Is Using AI for Unprecedented Social Media Surveillance’ (New Scientist, 5 May 2025) <www.newscientist.com/article/2479045-us-government-is-using-ai-for-unprecedented-social-media-surveillance> accessed 18 July 2025.

[113] Marc Groman, ‘No One Should Be Outed By an Ad’ (IAPP, 24 February 2015) <https://iapp.org/news/a/nai-takes-lgbt-stand> accessed 9 June 2025.

[114] Oliver L Haimson and others, ‘Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas’ (2021) 5 Proceedings of the ACM on Human–Computer Interaction 1.

[115] Emmie Hine, ‘Content Moderation in the Metaverse Could Be a New Frontier to Attack Freedom of Expression’ (2023) 36 Philosophy & Technology 43.

[116] P3.

[117] Emmie Hine, Josh Cowls and Luciano Floridi, ‘Assembly and Expression in Extended Reality: Transposing Human Rights across Realities’ (2025) 8 Interactive Entertainment Law Review 66.

[118] ‘Supernatural: Unreal Fitness on Meta Quest’ (Oculus) <www.meta.com/en-gb/experiences/supernatural-unreal-fitness/1830168170427369/> accessed 9 June 2025.

[119] ‘Beat Saber – VR Rhythm Game’ <https://beatsaber.com/> accessed 9 June 2025.

[120] Hyun Jung Oh and others, ‘Social Benefits of Living in the Metaverse: The Relationships among Social Presence, Supportive Interaction, Social Self-Efficacy, and Feelings of Loneliness’ (2023) 139 Computers in Human Behavior 107498.

[121] ‘What Is VR Teletherapy?’ (Society for Virtual Reality Therapy) <www.svrt.org/what-is-vr-teletherapy> accessed 9 June 2025.

[122] P11.

[123] Liana Spytska, ‘The Use of Virtual Reality in the Treatment of Mental Disorders such as Phobias and Post-Traumatic Stress Disorder’ (2024) 6 SSM – Mental Health 100351 <www.sciencedirect.com/science/article/pii/S2666560324000562> accessed 10 June 2025.

[124] Anna Felnhofer and others, ‘Barriers to Adopting Therapeutic Virtual Reality: The Perspective of Clinical Psychologists and Psychotherapists’ (2025) 16 Frontiers in Psychiatry <www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2025.1549090/full> accessed 27 June 2025.

[125] P9, P12, P11, P23; Akinloluwa Babalola and others, ‘Applications of Immersive Technologies for Occupational Safety and Health Training and Education: A Systematic Review’ (2023) 166 Safety Science 106214.

[126] P12.

[127] Joseph Henry, ‘British Forces Train on “Kamikaze Drones” with VR and Gaming Controllers’ (Tech Times, 5 September 2024) <www.techtimes.com/articles/307391/20240905/british-forces-train-kamikaze-drones-vr-gaming-controllers.htm> accessed 9 June 2025.

[128] ‘Creative Disruption: The Impact of Emerging Technologies on the Creative Economy’ (World Economic Forum) <www.weforum.org/publications/creative-disruption-the-impact-of-emerging-technologies-on-the-creative-economy/articles/> accessed 27 June 2025.

[129] P8, P3, P6, P12, P15, P20, P21, P26.

[130] Jeroen S Lemmens, Patti M Valkenburg and Jochen Peter, ‘Psychosocial Causes and Consequences of Pathological Gaming’ (2011) 27 Computers in Human Behavior 144.

[131] P1, P8, P3.

[132] Miguel Barreda-Ángeles and Tilo Hartmann, ‘Hooked on the Metaverse? Exploring the Prevalence of Addiction to Virtual Reality Applications’ (2022) 3 Frontiers in Virtual Reality <www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2022.1031697/full> accessed 9 June 2025.

[133] Emmie Hine and others, ‘Safety and Privacy in Immersive Extended Reality: An Analysis and Policy Recommendations’ (2024) 3 Digital Society 33.

[134] Ibid.

[135] Ibid.

[136] Stephen Palmisano, Robert S Allison and Juno Kim, ‘Cybersickness in Head-Mounted Displays Is Caused by Differences in the User’s Virtual and Physical Head Pose’ (2020) 1 Frontiers in Virtual Reality <www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2020.587698/full> accessed 9 June 2025.

[137] Eunjee Kim and Gwanseob Shin, ‘User Discomfort While Using a Virtual Reality Headset as a Personal Viewing System for Text-Intensive Office Tasks’ (2021) 64 Ergonomics 891.

[138] Emmie Hine and others, ‘Safety and Privacy in Immersive Extended Reality: An Analysis and Policy Recommendations’ (2024) 3 Digital Society 33.

[139] Christian Krekel, George Ward and Jan-Emmanuel De Neve, ‘Employee Wellbeing, Productivity, and Firm Performance’ (Social Science Research Network, 3 March 2019) <https://papers.ssrn.com/abstract=3356581> accessed 18 July 2025.

[140] ‘What Are Kids Doing in the Metaverse?’ (Common Sense Media, 23 March 2022) <www.commonsensemedia.org/kids-action/articles/what-are-kids-doing-in-the-metaverse> accessed 17 June 2025; Boone Ashworth, ‘Meta Horizon Worlds Has Been Taken Over by Children’ (Wired) <www.wired.com/story/meta-horizon-worlds-taken-over-by-children> accessed 17 June 2025.

[141] This section is not divided into benefits and risks, as none of the identified impacts under this theme were beneficial.

[142] P6, P11.

[143] P11.

[144] Jessica A Robinson, ‘“I Ain’t No Girl”: Exploring Gender Stereotypes in the Video Game Community’ (2023) 87 Western Journal of Communication 857.

[145] Cody Mejeur and Amanda Cote, ‘Who Gets to Be in The Guild? Race, Gender and Intersecting Stereotypes in Gaming Cultures’ (2021) 14 Loading: The Journal of the Canadian Game Studies Association 70.

[146] Lars de Wildt and SD Aupers, ‘Eclectic Religion: The Flattening of Religious Cultural Heritage in Videogames’ (2021) 27 International Journal of Heritage Studies 312.

[147] P3, P10.

[148] P3.

[149] P3.

[150] Sang-Su Lee, ‘Regulation of Criminal Laws against Sexual Crimes and Stalking in the Metaverse’ [2024] 저스티스 [Justice] 176.

[151] Verity McIntosh and Catherine Allen, ‘What Do Policymakers Need to Know about Harassment in the Metaverse?’ (2024) 5 Frontiers in Virtual Reality <www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1443384/full> accessed 20 June 2025.

[152] P7.

[153] P3.

[154] P11.

[155] P1, P19.

[156] P18.

[157] P10.

[158] P3.

[159] ‘Welcome to Decentraland’ (Decentraland) <https://decentraland.org/> accessed 25 November 2024.

[160] Jiajing Wu and others, ‘What Financial Crimes Are Hidden in Metaverse? Taxonomy and Countermeasures’ in Huawei Huang, Jiajing Wu and Zibin Zheng (eds), From Blockchain to Web3 and Metaverse (Springer Nature 2023) <https://doi.org/10.1007/978-981-99-3648-9_7> accessed 20 June 2025.

[161] P3.

[162] P19.

[163] P5.

[164] Alex Ambrose, ‘Comments to Standards Australia Regarding Children’s Safety in the Metaverse’ (Information Technology & Innovation Foundation, 24 January 2025) <https://itif.org/publications/2025/01/24/comments-to-standards-australia-regarding-childrens-safety-in-the-metaverse/> accessed 20 June 2025.

[165] P17.

[166] Emmie Hine, ‘Content Moderation in the Metaverse Could Be a New Frontier to Attack Freedom of Expression’ (2023) 36 Philosophy & Technology 43.

[167] James Hutson and Piper Hutson, ‘Immersive Technologies’ in James Hutson and Piper Hutson (eds), Inclusive Smart Museums: Engaging Neurodiverse Audiences and Enhancing Cultural Heritage (Springer Nature Switzerland 2024) <https://doi.org/10.1007/978-3-031-43615-4_5> accessed 9 June 2025.

[168] ‘What Is VR Teletherapy?’ (Society for Virtual Reality Therapy) <www.svrt.org/what-is-vr-teletherapy> accessed 9 June 2025.

[169] Emmie Hine, Josh Cowls and Luciano Floridi, ‘Assembly and Expression in Extended Reality: Transposing Human Rights across Realities’ (2025) 8 Interactive Entertainment Law Review 66.

[170] P5.

[171] Souhir Sghaier, Abir Osman Elfakki and Abdullah Alhumaidi Alotaibi, ‘Development of an Intelligent System based on Metaverse Learning for Students with Disabilities’ (2022) 9 Frontiers in Robotics and AI <www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2022.1006921/full> accessed 9 June 2025.

[172] Noor-Un-Nissah Soogund and Minnu Helen Joseph, ‘SignAR: A Sign Language Translator Application with Augmented Reality Using Text and Image Recognition’, 2019 IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS) (2019) <https://ieeexplore.ieee.org/abstract/document/8951322> accessed 9 June 2025.

[173] Jayasri Sai Nikitha Guthula and Aryabrata Basu, ‘Navigating Gender Biases in XR: Towards Equitable Technological Future’, 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (2024) <https://ieeexplore.ieee.org/document/10536464/?arnumber=10536464> accessed 7 March 2025.

[174] Chris Creed and others, ‘Inclusive AR/VR: Accessibility Barriers for Immersive Technologies’ (2023) Universal Access in the Information Society <https://link.springer.com/10.1007/s10209-023-00969-0> accessed 25 October 2023.

[175] P11.

[176] See Appendix – Case study one: Immersive social worlds.

[177] Alesia Zhuk, ‘Ethical Implications of AI in the Metaverse’ [2024] AI and Ethics <https://doi.org/10.1007/s43681-024-00450-5> accessed 7 March 2025.

[178] P8, P3, P6, P17, P18.

[179] Lisa Macpherson, ‘What Does Research Tell Us About Technology Platform “Censorship”?’ (Tech Policy Press, 20 May 2025) <https://techpolicy.press/what-does-research-tell-us-about-technology-platform-censorship> accessed 18 June 2025.

[180] Hadas Kotek, Rikker Dockum and David Sun, ‘Gender Bias and Stereotypes in Large Language Models’, Proceedings of the ACM Collective Intelligence Conference (Association for Computing Machinery 2023) <https://dl.acm.org/doi/10.1145/3582269.3615599> accessed 7 March 2025.

[181] Reece Rogers and Victoria Turk, ‘OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases’ (Wired, 23 March 2025) <www.wired.com/story/openai-sora-video-generator-bias> accessed 26 June 2025.

[182] ‘Official Release of Metaverse Seoul’ (Seoul Metropolitan Government) <https://english.seoul.go.kr/official-release-of-metaverse-seoul/> accessed 14 April 2025.

[183] Stefan P Thoma and others, ‘Increasing Awareness of Climate Change with Immersive Virtual Reality’ (2023) 4 Frontiers in Virtual Reality <www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.897034/full> accessed 9 June 2025.

[184] Yétindranathsingh Dhunnoo and others, ‘Improving Climate Change Awareness through Immersive Virtual Reality Communication: A Case Study’ (2023) 15 Sustainability 12969.

[185] Carles Sora-Domenjó, ‘Disrupting the “Empathy Machine”: The Power and Perils of Virtual Reality in Addressing Social Issues’ (2022) 13 Frontiers in Psychology <www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.814565/full> accessed 4 July 2025.

[186] Nir Kshetri and Yogesh K Dwivedi, ‘Pollution-Reducing and Pollution-Generating Effects of the Metaverse’ (2023) 69 International Journal of Information Management 102620.

[187] P25.

[188] ‘Renault Group Launches the First Industrial Metaverse’ (Renault) <www.press.renault.co.uk/releases/3024> accessed 9 June 2025.

[189] ‘PepsiCo Optimizes Distribution Centers With NVIDIA Omniverse’ (NVIDIA) <https://resources.nvidia.com/en-us-omniverse-industrial-digital-twins/pepsico-simulates> accessed 9 June 2025.

[190] Nir Kshetri and Yogesh K Dwivedi, ‘Pollution-Reducing and Pollution-Generating Effects of the Metaverse’ (2023) 69 International Journal of Information Management 102620.

[191] ‘PS4 Shipments Quietly Coming to an End, It Seems’ (Yahoo Tech, 31 March 2025) <https://tech.yahoo.com/articles/ps4-shipments-quietly-coming-end-133956719.html> accessed 17 June 2025.

[192] Cesar Cadenas, ‘Meta Is Cutting Off Support for the Original Quest Headset at the End of April’ (TechRadar, 2 April 2024) <www.techradar.com/computing/virtual-reality-augmented-reality/meta-is-cutting-off-support-for-the-original-quest-headset-at-the-end-of-april> accessed 17 June 2025.

[193] P21, P1.

[194] Nir Kshetri and Yogesh K Dwivedi, ‘Pollution-Reducing and Pollution-Generating Effects of the Metaverse’ (2023) 69 International Journal of Information Management 102620.

[195] ‘Reality Check’ (Ada Lovelace Institute, 2025) <https://www.adalovelaceinstitute.org/report/reality-check/> accessed 23 September 2025.

[196] ‘AI Opportunities Action Plan’ (Department for Science, Innovation and Technology, 2025) <www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan> accessed 18 June 2025.

[197] ‘Government Response to the Product Safety Review and Next Steps’ (Department for Business and Trade, 5 November 2024) <www.gov.uk/government/consultations/smarter-regulation-uk-product-safety-review/outcome/government-response-to-the-product-safety-review-and-next-steps> accessed 9 July 2025.

[198] ‘Reality Check’ (Ada Lovelace Institute, 2025) <https://www.adalovelaceinstitute.org/report/reality-check/> accessed 23 September 2025.

[199] ‘Futures, Foresight and Emerging Technologies’ (GOV.UK) <www.gov.uk/government/groups/futures-and-foresight> accessed 4 July 2025.

[200] ‘Emerging Technology Research Hub’ (FCA, 28 March 2023) <www.fca.org.uk/firms/emerging-technology-research-hub> accessed 4 July 2025.

[201] ‘DRCF Immersive Technologies Foresight Paper’ (DRCF) <www.drcf.org.uk/__data/assets/pdf_file/0027/273195/DRCF-Immersive-Technologies-Foresight-Paper.pdf> accessed 9 February 2024.

[202] Health and Safety at Work etc. Act 1974 <www.legislation.gov.uk/ukpga/1974/37/section/15> accessed 14 April 2025.

[203] ‘The Health and Safety at Work Act Explained’ (British Safety Council) <www.britsafe.org/training-and-learning/informational-resources/the-health-and-safety-at-work-act-explained> accessed 31 July 2025.

[204] Guilherme Gonçalves and others, ‘Systematic Review of Comparative Studies of the Impact of Realism in Immersive Virtual Experiences’ (2022) 55(6) ACM Computing Surveys 115 <https://dl.acm.org/doi/full/10.1145/3533377> accessed 12 May 2025; Unnikrishnan Radhakrishnan, Francesco Chinello and Konstantinos Koumaditis, ‘Investigating the Effectiveness of Immersive VR Skill Training and Its Link to Physiological Arousal’ (2023) 27 Virtual Reality 1091.

[205] ‘The Health and Safety at Work Act Explained’ (British Safety Council) <www.britsafe.org/training-and-learning/informational-resources/the-health-and-safety-at-work-act-explained> accessed 31 July 2025; Health and Safety at Work etc. Act 1974 <www.legislation.gov.uk/ukpga/1974/37/section/15> accessed 14 April 2025.

[206] Nuala Polo and Jacob Ohrvik-Scott, ‘An Eye on the Future’ (Ada Lovelace Institute, 29 May 2025) <https:// /> accessed 19 June 2025.

[207] ‘Safety of Domestic Virtual Reality Systems’ (Department for Business and Trade, 2 October 2020) <www.gov.uk/government/publications/safety-of-domestic-virtual-reality-systems> accessed 30 July 2025.

[208] ‘A Pro-Innovation Approach to AI Regulation’ (GOV.UK) <https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper> accessed 7 December 2024.

[209] ‘Reality Check’ (Ada Lovelace Institute, 2025) <https://www.adalovelaceinstitute.org/report/reality-check/> accessed 23 September 2025.

[210] Cami Rincon and Jorge Perez, ‘What Are Immersive Technologies?’ (Ada Lovelace Institute, 5 March 2025) <www.adalovelaceinstitute.org/resource/immersive-technologies-explainer/> accessed 17 June 2025.

[211] ‘AI Ethics and Governance in Practice: AI Sustainability in Practice Part One: Foundations for Sustainable AI Projects’ (The Alan Turing Institute) <www.turing.ac.uk/news/publications/ai-ethics-and-governance-practice-ai-sustainability-practice-part-one-foundations> accessed 12 June 2025.

[212] ‘HUDERIA: New Tool to Assess the Impact of AI Systems on Human Rights’ (Council of Europe) <https://www.coe.int/en/web/portal/-/huderia-new-tool-to-assess-the-impact-of-ai-systems-on-human-rights> accessed 12 June 2025.

[213] ‘HUDERIA: Risk and Impact Assessment of AI Systems’ (Council of Europe) <www.coe.int/en/web/artificial-intelligence/huderia-risk-and-impact-assessment-of-ai-systems> accessed 28 July 2025.

[214] ‘Understanding Artificial Intelligence Ethics and Safety’ (GOV.UK) <https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety> accessed 8 November 2023.

[215] ‘HUDERIA: New Tool to Assess the Impact of AI Systems on Human Rights’ (Council of Europe) <www.coe.int/en/web/portal/-/huderia-new-tool-to-assess-the-impact-of-ai-systems-on-human-rights> accessed 12 June 2025.
