
Why XR developers in the UK need accessible and authoritative guidance

The role of regulators in mapping existing laws on to the risks of immersive technologies

Catherine Allen, Emma Barrett

3 October 2025

Reading time: 12 minutes


The development of immersive technologies such as augmented reality (AR) and virtual reality (VR), often grouped under the umbrella term ‘eXtended reality’ (XR), is no longer a fringe pursuit.[1] Along with the exciting benefits XR may bring when embedded in homes, classrooms and workplaces, there can be complex risks to physical safety, psychological wellbeing and wider society. In theory, the production and use of XR are regulated under UK online safety and data protection legislation – including the Online Safety Act and the UK GDPR – but how easy is it for XR developers to apply this in their work?

Earlier this year, we – the authors of this piece – interviewed a group of developers about their understanding of XR-related risks and the measures they take to mitigate them. All interviewees spoke passionately about building experiences that are net positive for society. They try to adopt proactive measures to address various types of risk, but many of them said it is difficult to find clear, tailored guidance to apply existing laws to their work and implement effective harm mitigation tools.

UK regulators, particularly Ofcom and the ICO, have a critical role to play. Their websites provide clear, practical guidance for ‘2D’ online services; ‘3D’ XR now requires the same support.

Not just ‘on the horizon’

Confusion about XR risk mitigation may be due to the way this technology has been viewed as something in the realms of sci-fi, relevant to the future rather than the present. That view is now outdated.

Adoption of AR smart glasses and VR headsets has been rising steadily over the last decade, fuelled by yearly multibillion-pound investments.[2] Devices are easily accessible and increasingly affordable. Millions of people in the UK now have access to a VR headset at home or use one at work or school. According to Ofcom’s 2024 Online Nation Report, four per cent of UK adults regularly play games on a VR headset. That is comparable to usage of e-scooters or dating apps – activities whose risks we would not expect to be ignored.

What are the risks?

The issues associated with XR range from safety hazards, such as nausea and physical injury, to privacy violations. The latter have received by far the most research attention, likely because of the vast amounts of data generated whenever a person uses VR or AR.

Headsets and the applications that run on them can capture a user’s location and any personal details they may have shared deliberately. XR technologies can also register a wide range of biometric data, including by tracking eye movements. This data can be used to infer a person’s physical and mental health, cognitive abilities and, potentially, personality traits. The front-facing cameras embedded in VR headsets and AR glasses capture a user’s surroundings too, including pets, children, household valuables or confidential papers on a desk.
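To make the breadth of this data concrete, the sketch below shows, in TypeScript, the kind of per-frame telemetry record an XR application could assemble from standard sensor streams. It is a hypothetical illustration: the field names are our assumptions, not taken from any real SDK.

```typescript
// Hypothetical sketch of a per-frame telemetry record an XR app could
// assemble from standard sensor streams. All field names are illustrative.
interface XRTelemetrySample {
  timestampMs: number;                         // when the sample was taken
  headPosition: [number, number, number];      // metres, in tracking space
  headOrientation: [number, number, number, number]; // quaternion
  gazeDirection?: [number, number, number];    // eye-tracking ray, if supported
  pupilDiameterMm?: number;                    // commonly exposed by eye trackers
  handJointPositions?: number[][];             // skeletal hand-tracking joints
  roomMeshVertexCount?: number;                // how much of the home is scanned
}
```

A headset sampling at typical tracking rates produces tens of thousands of such records in a few minutes of use, which helps explain why inferences about health, cognition and personality become plausible.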

What happens to all this data? Many XR applications have opaque or overly permissive privacy policies, making it difficult for users to know how their information will be stored or used. Will it be shared with data brokers? Used for targeted advertising? Disclosed to the authorities? And even if bystanders are aware that AR glasses may be capturing images of them, what redress do they have against the person wearing the glasses if those images are stored and retained over time?

Even if platforms and app developers have robust privacy policies, sensitive data is a honeypot for malicious actors. Criminals targeting XR users are helped by the relative immaturity of XR hardware and software cybersecurity measures. Users who practise good cybersecurity in traditional online spaces may be less alert to cyber-attacks and social engineering in XR.

A 2022 Microsoft Security blog predicted that, rather than using fake emails and websites, criminals might conduct phishing attacks via fake VR avatars impersonating a person’s boss or bank manager. More recently, in a lab-based ‘proof of concept’ attack on VR headsets, researchers produced software that could alter what users saw in chatrooms and on websites, tricking them into entering sensitive information on fake sites and changing what other users’ avatars appeared to say.

That was only a laboratory demonstration, but the practice of side-loading XR applications to bypass established app stores makes the risk of installing this sort of malware very concrete, as it may be hidden within an apparently benign application.

Other crimes that XR technologies might facilitate include identity theft, fraud, and child sexual exploitation and abuse. Bullying, discrimination and harassment are common in social VR spaces, while human and automated moderation remain challenging. Organisations with a presence in virtual or augmented reality spaces may need to prepare for digital vandalism, or for their app being used for money laundering, fraud and theft of digital intellectual property.

Broader societal harms that XR may amplify include the exploitation and manipulation of VR spaces by conspiracy theorists and hostile state actors to spread misinformation, and by extremist organisations for radicalisation, training and planning.

Some of these risks may look like mere amplifications of those encountered in non-XR online spaces, but they are not. Besides the sheer amount of biometric data collected in 3D spaces, which changes privacy issues in qualitative ways, the immersive nature of XR makes the experience of harm more visceral. And even where XR-related risks resemble those posed by other online spaces, their unfamiliar 3D guise can obscure how legislation applies, and the route to redress remains opaque.

For instance, the legal debate about how existing laws on physical assault apply to XR has yet to settle, and there is no case law. When it comes to sexual assault, legislation may be worded in terms of physical interaction, but an assault experienced in virtual reality may affect – and traumatise – the victim in a similar way.

In general, VR interactions are ephemeral unless one party records them and, even if they do, gathering evidence to investigate any kind of crime remains difficult.

The developer mindset and their concerns

The developers who volunteered for our study were all working for small and medium-sized enterprises, with the exception of an NHS delivery team. Their organisations offer examples from a range of XR applications and funding models, including client-funded AR experiences for real-life venues, investor-funded consumer VR apps, VR apps for patient wellbeing, and educational XR experiences paid for by institutional clients, such as universities.

A running theme in the interviews was a deeply held desire to build safe, ethical products. The interviewees seemed to embrace a personal duty to ‘do no harm’. One said: ‘My motivation comes from a belief that the technology can be used for good… these magic moments of joy.’

Another developer described deciding not to include any violent content in their company’s XR game, departing from the original proposal, even though they were aware that this decision might reduce the app’s revenue: ‘I don’t want to create something that advocates violence in an unnecessary way, especially within a space that’s so realistic… that wasn’t something I wanted to put into the world.’

The interviewees showed a strong grasp of experiential risks, including physical discomfort, psychological distress and accessibility challenges. Developers described deliberately capping VR experience lengths to avoid nausea and disorientation, and implementing ‘panic buttons’ or safe virtual spaces for users who feel overwhelmed.
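To illustrate what such mitigations can look like in practice, here is a minimal sketch of a session-length cap and a ‘panic’ exit for a WebXR application, written in TypeScript. It assumes a browser with WebXR support and the WebXR type definitions; the 20-minute cap and the choice of the ‘squeeze’ input as the exit gesture are our illustrative assumptions, not any interviewee’s design.

```typescript
// Minimal sketch: capping session length and wiring a 'panic' exit in WebXR.
// Assumes WebXR support and type definitions (e.g. @types/webxr).

const MAX_SESSION_MS = 20 * 60 * 1000; // illustrative cap: 20 minutes

async function startComfortCappedSession(): Promise<void> {
  if (!navigator.xr) throw new Error('WebXR not available');
  const session = await navigator.xr.requestSession('immersive-vr');
  const startedAt = performance.now();

  // End the session automatically once the cap is reached.
  const onFrame = (time: DOMHighResTimeStamp, _frame: XRFrame) => {
    if (time - startedAt > MAX_SESSION_MS) {
      void session.end(); // returns the user to the 2D page
      return;
    }
    // ...render the scene for this frame here...
    session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);

  // 'Panic' exit: a controller squeeze ends the session immediately.
  // A real app would bind this to a dedicated, well-signposted input.
  session.addEventListener('squeeze', () => void session.end());
}
```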

The study also found evidence of a good level of awareness of risks around representation. For example, one developer described consulting with historians in an attempt to produce sensitive and accurate portrayals of colonial history. Accessibility was another central topic in the interviews, with several developers designing their applications for seated use by default.

Knowledge was patchier in areas like privacy law and online safety regulation. Several interviewees said that UK frameworks still read harm through a 2D, web-first lens, leaving grey areas for embodied, spatial tech. The collection of voice data in immersive environments, for instance, presents a unique privacy risk. While voice cloning exists in 2D, its application in XR is particularly potent. A user could be surreptitiously solicited to utter a specific phrase in an open multi-user environment. The phrase could then be cloned with AI tools and used to impersonate a trusted colleague in a virtual meeting, or for financial fraud.

One interviewee, employed by a company whose cybersecurity practice was recognised through Cyber Essentials accreditation – the UK government scheme for assessing whether companies follow good cybersecurity practice – said the scheme ‘didn’t focus heavily on the XR element. It was more [about] our internal systems: where data comes in, and how that is handled and managed.’

Several developers said that one element of their approach to privacy risks was simply to avoid explicitly collecting personal data (‘we haven’t made anything which requires users to create an account and for us to store any of their data’). However, even nominally anonymous XR users can be identified through biometric information collected for analytics purposes or as part of standard system operations, as the sketch below illustrates.
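As a hypothetical illustration of that point, the following TypeScript sketch shows how even crude summary statistics of head motion – a person’s standing height and how briskly they turn – could act as a behavioural fingerprint linking an ‘anonymous’ session back to a known user. The features and the nearest-neighbour matching rule are our illustrative choices; published re-identification attacks use richer data and stronger models.

```typescript
// Hypothetical illustration: re-identifying an 'anonymous' user from
// head-motion telemetry alone. Features and matching rule are illustrative.

type Sample = { headHeightM: number; turnSpeedDegS: number };

function featureVector(samples: Sample[]): number[] {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const std = (xs: number[]) => {
    const m = mean(xs);
    return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
  };
  const heights = samples.map((s) => s.headHeightM);
  const speeds = samples.map((s) => s.turnSpeedDegS);
  // Height and head-turn briskness tend to be stable per person.
  return [mean(heights), std(heights), mean(speeds), std(speeds)];
}

// Match a new session against previously seen sessions by nearest
// feature vector (Euclidean distance).
function closestUser(
  unknown: Sample[],
  enrolled: Map<string, Sample[]>,
): string {
  const f = featureVector(unknown);
  let bestUser = '';
  let bestDist = Infinity;
  for (const [userId, history] of enrolled) {
    const g = featureVector(history);
    const dist = Math.hypot(...f.map((v, i) => v - g[i]));
    if (dist < bestDist) { bestDist = dist; bestUser = userId; }
  }
  return bestUser;
}
```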

The lack of XR-specific external guidance exposes developers to legal and reputational consequences. Interviewees apply safety-by-design principles to address the risks they are aware of, even in the absence of formal requirements. However, most admitted to a limited understanding of the full range of risks their applications might pose. Many work under severe resource constraints: although user bases are growing, SME developers still operate in a new and volatile market, leaving less time and money to invest in harm mitigation.

Crucially, all interviewed developers argued that they could achieve far more with clearer, XR-specific guidance. Regulators and/or government departments could methodically go through existing legislation and guidance and create XR-specific interpretations. Such guidelines would not only shape design decisions but also help developers push back when clients or investors prioritise speed, profit, and cost-saving over safety: ‘sometimes it’s useful to be able to refer to guidelines to defend a certain position’.

What role do regulators have?

The interviewees emphasised that responsibilities are shared – platforms, clients and investors all have parts to play – but they looked to regulators for a catalytic, coordinating role to anticipate harm. Regulators are uniquely placed not just to enforce laws but also to guide compliance with them. They are independent of commercial interests, ready to take pragmatic and proportionate steps to protect users while not stifling tech innovation, and capable of building international consensus.

Although UK regulators have not completely neglected immersive technologies, public documents suggest that applying existing regulations to XR is seen as ‘horizon scanning’ rather than a ‘here and now’ issue. Even in its otherwise robust application of the Online Safety Act, Ofcom’s approach is to treat XR like any other online technology.

One consequence is that XR developers searching regulators’ web pages for practical guidelines or advice will mostly draw a blank. The only exception at present is the general digital guidance issued in February 2025 by the Medicines and Healthcare products Regulatory Agency, developed as part of a Wellcome Trust-funded project. The project focused on clarifying the regulation and evaluation of digital mental health technology, and explicitly included guidance on XR technologies.

In the broader XR ecosystem, trade organisations and scholarly bodies across the world have developed draft frameworks of standards and codes of practice for safe, secure and ethical XR development.[3] Most such frameworks are the result of lengthy consultation and often cover a great deal of important ground. The developers interviewed during the study, however, were unaware of these initiatives, and some were quite dismissive of the role of trade associations. In the words of one developer: ‘to be quite frank, you know, it’s an echo chamber’. For a small developer, the existence of several different, often overlapping guidance documents makes applying them to their work cumbersome and time-consuming.

What do XR developers need?

XR developers need clear, accessible guidance: an easy-to-find authoritative resource that spells out how existing laws map onto immersive technologies and what ‘good practice’ looks like for mitigating risks. The guidance should include practical examples and checklists that organisations of any size can follow. This would also enable the public to hold companies to account, as end users would know what they should expect from XR products.

XR can meaningfully affect wellbeing, security and safety, for good and for ill. In other professions where the public may be at risk, such as medicine, engineering and financial advice, we depend on qualifications, regulations and licensing to verify the trustworthiness and competence of practitioners. Authorities across the world are finally addressing the safety of traditional online services. We must ensure these frameworks are applied effectively to XR as well. Acting now would enable those who use XR to seize its benefits while protecting us all.


The authors thank the developers who generously gave their time to participate in this research, and acknowledge the support of the University of Manchester and the Simon Industrial Fellowship in making this work possible. To find out more or to participate in future studies, contact Catherine Allen or Emma Barrett.


[1] Ball, M. (2024). The metaverse: Fully revised and updated edition: Building the spatial internet (2nd edn). Liveright Publishing.

[2] Mobile AR has been big business for more than a decade and continues to be. AR smart glasses are rapidly taking off: Meta’s Ray-Bans were first to market, but many other companies are now piling in; see https://www.roadtovr.com/amazon-jayhawk-smart-glasses-report-meta-hypernova-celeste/, https://blog.google/products/android/android-xr-gemini-glasses-headsets/ and https://www.bloomberg.com/news/articles/2025-05-22/apple-plans-glasses-for-2026-as-part-of-ai-push-nixes-watch-with-camera. VR investment is also steady; it might seem to have been declining only because initial market predictions were wildly inflated.

[3] For reference, see: https://www.standards.org.au/news/metaverse-risks-consumer-safety-requirements-highlighted-in-new-whitepaper, https://principles.xrguild.org/, https://metaverse-standards.org/domain-groups/ethical-principles-for-the-metaverse/, https://standards.ieee.org/beyond-standards/ethical-considerations-of-extended-reality-xr/.
