
Will the UK AI Bill protect people and society?

Assessing the credibility of forthcoming legislative proposals

Nuala Polo

29 August 2025

Reading time: 27 minutes


Since coming into power, the current UK government has shown unchecked optimism about AI’s potential, shifting its focus from AI safety to AI adoption. It has committed £2 billion to ‘unleashing’ AI by 2030, but has failed to set out the corresponding safeguards needed to manage the risks outlined in its International AI Safety Report 2025.

On top of longstanding risks like discrimination and job displacement, harms posed by AI assistants, with the potential to misinform, manipulate and behave inappropriately, are rapidly coming to the fore. Despite these risks, the UK lacks a comprehensive legal framework to govern AI systems. Many of these harmful capabilities arise during the design and development of AI systems – where few rules apply and few regulators have oversight. And those most able to manage these risks – AI developers – are not incentivised to do so.

This lack of regulation stands in stark contrast to public attitudes. A nationally representative survey from the Ada Lovelace Institute and the Alan Turing Institute found that 72% of the UK public would feel more comfortable with AI if it was regulated – an increase of ten percentage points from 2022/23. However, this demand for oversight has not yet been met with policy action.

While media outlets suggest that consultation on a UK AI Bill may be imminent, given the government’s deregulatory posture, there are concerns that any proposals will fall short of delivering the robust regulation the public expects. So how do we assess what comes next?

For a technology that is already shaping our economy, society and democracy, we can – and should – apply the same expectations that guide regulation in other domains like food, energy, transport and medicine. We should consider:

  • Who gets to decide what a ‘safe’ AI system looks like?
  • Will the government and regulators know what risks AI systems pose before they are deployed?
  • What powers will the government and regulators have to intervene when something goes wrong?
  • Are AI developers and intermediaries (like model hosts) incentivised to manage risks themselves or do they simply pass them on?
  • Will the government and regulators know the costs of building and using AI, so they can make informed trade-offs about how and when to use it?

These are the questions we would ask of regulatory frameworks in other sectors, and for other technologies with the power to shape how people live, work and interact with each other.

The table below asks these questions across four critical sectors: aviation, financial services, pharmaceuticals and food safety, and contrasts the safeguards in place there with those currently available to protect against AI risks and harm.

You would expect that an aeroplane is tested for safety before it flies, and that a new drug is put through clinical trials before it is brought to market – why not the same for AI?

| Feature | Aviation | Financial services | Pharmaceuticals | Food safety | Foundation models (before AI Bill) |
|---|---|---|---|---|---|
| Proactive risk monitoring: are companies incentivised to seek information about risks to promote a safer ecosystem? | Yes[i] | Yes[ii] | Somewhat[iii] | Somewhat[iv] | Voluntary. If companies conduct testing, there are no incentives to share results with the public, government or regulators. |
| Safety standards: are there safety standards for products/services? | Yes[v] | Yes[vi] | Yes[vii] | Yes[viii] | Voluntarily applied and only exist for some risks. Companies’ safety standards are not visible to government/regulators, despite commitments made at the 2023 AI Safety Summit. |
| Independent standards: are the bodies that set safety standards independent? | Yes[ix] | Yes[x] | Yes[xi] | Yes[xii] | No. |
| Market entry authorisation: is there a watchdog/regulator who checks that products meet these standards before they go to market? | Yes[xiii] | Yes[xiv] | Yes[xv] | Somewhat[xvi] | No general government or regulator powers to prevent sale/supply of unsafe foundation models. Very narrow exceptions in specific domains (e.g. if a foundation model is explicitly developed as a medical device). |
| Post-market monitoring: are products/services subject to additional scrutiny once they have been released to the public? | Yes[xvii] | Yes[xviii] | Yes[xix] | Somewhat[xx] | Voluntary. Companies may conduct additional testing, but there are no incentives to share results with the public, government or regulators. |
| Independent regulator: is there a dedicated regulator who provides oversight in this domain? | Yes[xxi] | Yes[xxii] | Yes[xxiii] | Yes[xxiv] | No regulator for general-purpose AI. Existing regulators may have a role to play in oversight of downstream applications. |
| Enforcement powers: do regulators have power to revoke market access when products/services are unsafe? | Yes[xxv] | Yes[xxvi] | Yes[xxvii] | Yes[xxviii] | No general government or regulator powers to withdraw unsafe foundation models from the market. Very narrow exceptions in specific domains (e.g. if a foundation model is explicitly developed as a medical device). |
| Accountability measures: is there someone/an entity that can be held accountable for risks/harms posed by a product/service? | Yes[xxix] | Yes[xxx] | Somewhat[xxxi] | Yes[xxxii] | No. |
| Transparency/reporting requirements: are companies required to share information with government/regulators about their products/processes? | Yes[xxxiii] | Yes[xxxiv] | Yes[xxxv] | Yes[xxxvi] | No. |
| Routes for redress: are there ways for individuals who have been harmed by products/services to seek redress/hold someone accountable? | Yes[xxxvii] | Yes[xxxviii] | Somewhat[xxxix] | Yes[xl] | No. |

As we look towards the prospect of a consultation and AI Bill, we encourage parliamentarians, journalists and civil society to ask these questions: do the government’s proposals address the growing list of everyday harms we see in the news each week, and do they equip the government to manage these risks at the source?

Five tests for an effective AI Bill

Below, we set out five tests for evaluating whether the government’s future proposals on AI regulation include the safeguards needed to protect the public.

1) Who gets to decide what a ‘safe’ AI system looks like?

A fundamental test for AI legislation is who gets to define what ‘safe’ means. Who determines which risks matter, how those risks are assessed and who is responsible for evaluating them? If these decisions are left to the companies developing AI, they are likely to reflect commercial interests rather than the public interest.

Right now, major tech companies developing AI systems set their own safety standards. A recent Reuters investigation revealed the consequences of this approach, finding that Meta’s safety policy for its generative AI systems permits behaviours most would consider clearly harmful. The policy allows AI chatbots to engage in ‘sensual’ conversations with children, spread false medical information and make demeaning statements about people with protected characteristics. These outputs were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.

This is unsurprising. Tech companies are driven by profit; their primary aim is to maximise engagement with their products. In this case, Meta designed chatbot behaviours to encourage prolonged user interaction, even if that meant sanctioning unsafe outputs.

It is concerning that no regulator in the UK is currently explicitly empowered to review companies’ internal AI policies, assess their adequacy or mandate changes when they are unsafe. This leaves the public reliant on the goodwill and judgement of private actors who often operate behind closed doors and are driven by commercial incentives.

A credible AI Bill must establish powers for government and regulators to define what ‘safe’ looks like. This includes setting safety standards, thresholds for acceptable risk and rules for how harms must be assessed and mitigated. Regulators must be equipped to determine which risks matter, how they are measured and what constitutes a failure to protect the public. This includes the authority to require disclosure of internal safety policies and risk assessments, and impose penalties where systems fall short of legal standards.

2) Will the government and regulators know what risks AI systems pose before they are deployed?

A second test for an AI Bill is whether it ensures that government and regulators understand the risks AI systems pose before they are released to the public. Currently in the UK, there is no legal requirement for companies to conduct or share pre-deployment safety testing of AI systems. This leaves regulators, policymakers and the public in the dark about whether models contain dangerous biases, produce harmful outputs or behave unpredictably in high-stakes contexts.

Yet the harms are already playing out. Grok, a chatbot developed by xAI, recently produced antisemitic content – praising Adolf Hitler, and referring to itself as ‘MechaHitler’. Grok has also promoted conspiracy theories about ‘White genocide’ in South Africa and generated Nazi imagery, highlighting how poor safeguards and system instructions can lead to dangerous outputs. These weren’t isolated incidents; they reflect what happens when models are released without adequate pre-launch testing and without safeguards that prioritise safety over provocation.

And the risks aren’t limited to toxic language. Bias in foundation models can also propagate through downstream applications. Consider an AI-enabled hiring tool built on top of a large language model (LLM) like those behind ChatGPT. If the foundation model contains biases, the recruitment tool will inherit them and may unfairly prioritise certain candidates, affecting job prospects for large portions of the population. This is already occurring: studies have shown that resume-screening LLMs disadvantage Black men, reinforcing existing inequalities in the labour market. These harms are not only foreseeable but often preventable, if companies are required to test their models before deployment and subject them to regulatory scrutiny.

An AI Bill should mandate pre-deployment testing for general-purpose AI systems. This should include regulatory requirements for evaluations before a model can be publicly released, like pre-market authorisations in other sectors (e.g. pharmaceuticals). These evaluations should assess system behaviour across different demographic groups, test for disparate impacts and include transparency obligations around training data, fine-tuning techniques and prompt engineering. The EU’s General-Purpose AI Code of Practice sets out similar obligations, requiring developers to conduct risk identification, risk analysis and model evaluation prior to deployment. If the UK fails to do the same, the country risks becoming a testing ground for unsafe AI, where harms are only discovered once the damage is done.

3) What powers will the government and regulators have to intervene when something goes wrong?

A third test for an AI Bill is whether it gives the government and regulators powers to act when AI models cause harm. At present, UK regulators lack the legal authority to compel the modification of AI models or their removal from the market, even when they are linked to serious consequences. Without these powers, the government and regulators are left watching from the sidelines as AI shapes public discourse and spreads harmful content.

AI plays a central role in shaping what we see online. Internet platforms use AI to rank and filter content, often based on metrics around relevance and engagement. However, prioritising engagement can inadvertently amplify harmful content and misinformation. This design choice can have devastating consequences – as seen in the 2024 Southport riots. After fatal stabbings at a children’s dance class, false claims circulated online about the attacker’s identity and asylum status. These claims, amplified by recommendation algorithms, fuelled anti‑immigrant sentiment and led to violence targeting mosques and migrant housing.

Despite these harms, at present the UK government has no legal tools to intervene. Regulators cannot force companies to disable dangerous models or withdraw them from the market, even temporarily. This legal vacuum leaves the government unable to respond to real-time harms or prevent similar incidents from recurring.

An AI Bill must empower UK regulators to intervene when models pose serious risks to individuals or society. This should include clear legal authority to require modifications to models that generate harmful outputs, enforce the temporary or permanent withdrawal of AI systems from the UK market, and compel companies to change unsafe prompts, system instructions or training data as preconditions for market re-entry. Without these powers, regulators will remain unable to prevent AI-driven harm, no matter how predictable or severe.

4) Are AI developers and intermediaries (like model hosts) incentivised to manage risks themselves or do they simply pass them on?

A fourth test for an AI Bill is whether it creates incentives for developers and platform operators to take responsibility for risks posed by their technologies. At present, many AI developers and intermediaries (like model hosts or platform providers) shirk responsibility for foreseeable risks inherent in their models or services. Rather than addressing these risks at the source – through safer design, testing and usage policies – they shift liability onto the smaller businesses who purchase and deploy these systems, and who are then burdened with the legal and financial risks.

At present, there are no clear incentives or legal frameworks requiring model developers to take responsibility for the harmful outputs of their systems. Consider AI assistants marketed as companions, coaches or even therapists. Their anthropomorphic design can foster trust and emotional dependence, but it can also have disturbing and dangerous effects, like manipulation, psychological dependence and coercive influence.

In one reported case in Florida, an AI chatbot allegedly encouraged a teenager who was experiencing depression to die by suicide. In another case in Texas, the same system reportedly encouraged a teenager to murder his parents after they restricted his screen time. While lawsuits have been brought forward in the US against the companies who developed these systems, there are no clear legal frameworks designating who should be held liable for the output of these chatbots, if anyone at all. What’s more, the developers have continued to dodge responsibility, aiming to dismiss the claims on several grounds, including the argument that the chatbots’ outputs constituted free speech.

Likewise, there is little incentive for AI intermediaries to take responsibility for harmful outputs produced by the models they host. For example, research from the Oxford Internet Institute reveals the proliferation of deepfake generation tools on popular hosting platforms, like Hugging Face and Civitai. These models can cause serious harms spanning non-consensual imagery, revenge porn and child sexual abuse materials, disproportionately targeting women and girls. However, while UK law criminalises the sharing of such content, the platforms that enable access to these tools frequently avoid liability by presenting themselves as neutral intermediaries.

An AI Bill must establish accountability mechanisms at both the model and platform levels. Developers should be legally liable for harms caused by their models, particularly when those harms arise from manipulation, deception or exploitation. Meanwhile, hosting platforms must not be treated as passive intermediaries; they should bear responsibility for preventing misuse by enforcing content moderation and usage restrictions, and providing clear pathways for user complaints and redress. This includes enabling users harmed by AI-generated content, like deepfakes, to identify liable platforms and seek justice or compensation.

5) Will the government and regulators know the costs of building and using AI, so they can make informed trade-offs about how and when to use it?

The last test for any AI Bill will be whether it incentivises developers and deployers to be transparent about their systems. To make informed choices about how and where AI systems are used, we need more information about the risks they pose and the resources they consume. However, there are currently no legal requirements for companies to disclose the safety, environmental or social impacts of their systems. As a result, the government and regulators are unable to weigh the consequences of deployment, protect public interests or plan responsibly for the future.

This is especially alarming given the scale of harm at stake. The serious risks posed by AI include exacerbating public health crises, enabling cyberattacks and bioweapon design, and perpetuating bias in recruitment, healthcare and financial services. AI also presents systemic risks, such as destabilising democracies and accelerating labour market disruption. Yet, even as these risks grow, AI developers are under no obligation to test for them, let alone disclose them. In industries like aviation or pharmaceuticals, safety testing and reporting are mandatory and foundational. When an aeroplane part fails or a medicine causes unexpected side effects, regulators are notified. With AI, we have no equivalent system in place.

What’s more, we are not tracking the resource burden of AI, despite mounting evidence that its infrastructure could strain national utilities. Training and running large AI models requires huge amounts of energy and water – some data centres consume between 11 and 19 million litres of water per day, comparable to the needs of a town of 50,000 people. And yet companies are not required to disclose their water or energy usage. This leaves public bodies unable to plan for critical infrastructure needs or ensure equitable access to scarce resources. UK water regulators have already raised concerns about future water shortages, which are hard to address when usage data is treated as a trade secret.
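The town comparison above can be sanity-checked with simple arithmetic. A minimal sketch – the per-person figures below are derived from the reported range purely for illustration, not sourced measurements:

```python
# Back-of-the-envelope check of the data-centre water comparison above.
# The 11-19 million litres/day range comes from the text; the per-person
# demand it implies is computed here for illustration only.
litres_per_day_low = 11_000_000
litres_per_day_high = 19_000_000
town_population = 50_000

# Implied daily water demand per resident if the town analogy holds:
implied_low = litres_per_day_low / town_population    # 220 litres/person/day
implied_high = litres_per_day_high / town_population  # 380 litres/person/day

print(f"{implied_low:.0f}-{implied_high:.0f} litres per person per day")
```

The implied 220–380 litres per person per day sits above average UK household consumption, but total municipal demand (including businesses and leakage) is substantially higher than household use alone, so the comparison is broadly plausible.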

An AI Bill must introduce reporting obligations for developers of high-risk and general-purpose AI systems. At a minimum, companies should be required to disclose:

  • known and foreseeable risks to health, safety, rights and democratic processes;
  • environmental and infrastructure impacts, including energy consumption, water usage and power sources; and
  • the steps taken to assess, mitigate or avoid these harms.

Without this information, governments cannot make responsible decisions, regulators cannot enforce safeguards and the public cannot trust that trade-offs are being made responsibly.

From obstacle to opportunity

If the UK fails to regulate AI effectively, the government risks being left without sight and without tools to manage AI’s rapidly evolving challenges. The consequences won’t just harm individuals – they will undermine the country’s ability to adopt and benefit from the technology. Without clear, enforceable standards set by public authorities, developers are left to self-regulate, deployers face uncertainty and liability, and users are exposed to avoidable risks. This lack of oversight erodes trust, creates market confusion and slows adoption, as organisations hesitate to deploy systems when the risks and responsibilities remain unclear.

The UK shouldn’t see this as an insurmountable challenge, but as a valuable opportunity to establish an AI regulatory framework that supports safe, confident adoption. This would offer developers clear guidelines, give deployers confidence that risks are properly managed and equip regulators with the tools to monitor, investigate and intervene when systems pose harm. Done right, regulation becomes not just a way to manage risks, but a foundation for innovation. This is how we unlock AI’s full potential in a way that genuinely benefits society while protecting rights, safety and public trust.


Endnotes

[i] Airlines are subject to mandatory safety reporting, which fosters a safety culture. By recording and disseminating known risks, actors alert other entities of potential harms, in the hope of preventing them. The Civil Aviation Authority also has a safety risk management programme, which identifies trends and emerging risks influencing the UK aviation system. See: Aviation Safety Reporting | EASA, Safety risk management process | UK Civil Aviation Authority.

[ii] The Bank of England conducts annual stress tests of the largest financial institutions to identify potential risks/vulnerabilities. They also produce biannual financial stability reports, assessing the stability and resilience of the UK’s financial system. See Stress testing | Bank of England and Financial Stability Report – July 2025 | Bank of England.

[iii] In pharmaceuticals, voluntary safety reporting contributes to a safety culture. These include the Medicines and Healthcare products Regulatory Agency’s (MHRA) ‘Yellow Card Scheme’: an online database that collates voluntary reports of adverse drug reactions from healthcare professionals and the public, to ensure they are acceptably safe for patients and users. In the past, the MHRA has also supported futures activities through the Innovation Accelerator, which conducted horizon scanning to identify opportunities to support the development of innovative products. See Yellow Card | Making medicines and medical devices safer, and Horizon Scanning Case Study: What is an Actionable Horizon Scanning Signal (AHSS)? – Case study – GOV.UK.

[iv] The Food Standards Agency has previously undertaken foresight exercises to anticipate systemic risks. They currently provide funding to the Food Safety Research Network to conduct horizon scanning exercises that anticipate novel risks. See: FSA 22-06-06 – Foresight Function and Horizon Scanning – Annual Update to the Board | Food Standards Agency and FSA extends support for Food Safety Research Network to anticipate new risks and help protect public health | Food Standards Agency.

[v] Relevant international safety standards include European-wide safety regulations set by the European Aviation Safety Agency – an agency of the European Commission. See EASA | European Union Aviation Safety Agency.

[vi] The Financial Conduct Authority sets conduct rules and standards, the Prudential Regulation Authority sets prudential standards for banks and insurers, and the Financial Reporting Council sets standards for auditing and ethical practices. The Basel Committee on Banking Supervision sets global standards for the prudential regulation of banks, while the Financial Stability Board works with international standards-setting bodies to create the Compendium of Standards. See FCA Handbook – FCA Handbook, UK Accounting Standards, The Bank of England PRA | Prudential Regulation Authority Handbook & Rulebook The Compendium of Standards – Financial Stability Board, and The Basel Committee – overview.

[vii] British Pharmacopoeia provides official standards for pharmaceutical substances and medicinal products. British Pharmacopoeia Commission is an advisory non-departmental public body, sponsored by the UK Department of Health and Social Care. There is also a breadth of quality standards, manufacturing process standards and quality control standards that apply to pharmaceutical products. See British Pharmacopoeia Commission – GOV.UK and Pharmaceutical regulation in the UK | Ada Lovelace Institute.

[viii] The Food Safety Act (1990) provides the framework for all food legislation in Great Britain and is supported by a breadth of legal standards for labelling and composition of food products such as bottled water, milk and meat. See Key regulations | Food Standards Agency and Food standards: labelling and composition – GOV.UK.

[ix] See endnote 5.

[x] See endnote 6.

[xi] See endnote 7.

[xii] Relevant standards are set by the Food Standards Agency and the Food Standards Scotland, in collaboration with government and local authorities. See Key regulations | Food Standards Agency and Our remit | Food Standards Scotland.

[xiii] To fly, aircraft must receive a certificate of airworthiness. This is granted when an aircraft is shown to conform to the certificated type design standards and to be in a condition for safe operation. See Certificates of Airworthiness | UK Civil Aviation Authority.

[xiv] Financial companies and their business models need to be authorised by the regulator. This means that the company applying for authorisation must show how it will be governed, the kinds of activities it intends to undertake and how it intends to ensure that these activities will comply with regulatory principles such as the Consumer Duty. See New rules? | Ada Lovelace Institute and Sample business plan | FCA.

[xv] All medicines placed on the market in the UK require marketing authorisation, granted by the Medicines and Healthcare products Regulatory Agency. See Pharmaceutical regulation in the UK | Ada Lovelace Institute.

[xvi] Certain food and feed products, called regulated products, must go through a risk analysis process, and require market authorisation before they can be sold. Food and feed products, which are not individually tested before being sold, are randomly inspected by food safety officers to ensure that products comply with regulation. See Regulated products application guidance | Food Standards Agency.

[xvii] Aircraft are subject to pre-flight inspections before every flight to check that performance hasn’t changed or degraded over time, ensuring continuing airworthiness. Aircraft that have been awarded a Certificate of Airworthiness must have it validated annually with an Airworthiness Review Certificate. See Airworthiness review certificates ARC | UK Civil Aviation Authority and AMC M.A.301(a) Continuing airworthiness tasks.

[xviii] Monitoring focuses on assessing the stability of financial markets and the continued stability of financial institutions. See New rules? | Ada Lovelace Institute.

[xix] Monitoring focuses on identifying harmful incidents and/or further information on whether a drug works as intended. Schemes include the ‘Yellow Card Scheme’ (see endnote 3), and the ‘Black Triangle Scheme’. The latter allocates some medications with ‘Black Triangle status’ when they are subject to ‘intense monitoring’. See Yellow Card | Making medicines and medical devices safer and The Black Triangle Scheme (▼ or ▼*) – GOV.UK.

[xx] All businesses are legally required to report to the Food Standards Agency and Food Standards Scotland if they have reasons to believe that a food or feed product placed on the market is unsafe. Certain regulated products (e.g., GMOs and feed additives) also require post-market monitoring as part of their terms of authorisation. See Reforms to the market authorisations process for regulated products | Food Standards Agency.

[xxi] The Civil Aviation Authority is the UK’s dedicated aviation regulator. See Our roles and responsibilities | UK Civil Aviation Authority.

[xxii] The Bank of England, the Prudential Regulation Authority and the Financial Conduct Authority are the UK’s three main financial regulators. See New rules? | Ada Lovelace Institute.

[xxiii] The Medicines and Healthcare products Regulatory Agency, supported by the Commission on Human Medicines and the National Institute for Health and Care Excellence are the UK’s pharmaceutical regulators. See New rules? | Ada Lovelace Institute.

[xxiv] The Food Standards Agency and Food Standards Scotland are the UK’s food safety regulators. See Who we are | Food Standards Agency and Food Standards Scotland.

[xxv] Regulators are empowered to ground aircraft for safety reasons. The aircraft cannot be flown until a new Certificate of Airworthiness is granted, following additional safety testing. See Grounded aircraft | UK Civil Aviation Authority.

[xxvi] Where serious breaches of the Financial Services and Markets Act are suspected, regulators have powers to launch formal investigations into potential wrongdoing. If a breach is subsequently proven, it could lead to significant sanctions against a business or its individuals. The Financial Conduct Authority and Prudential Regulation Authority can also remove or restrict a firm’s authorisation to undertake regulated activities or prohibit individuals from working in regulated financial services in the future. See Enforcement Information Guide.

[xxvii] The Medicines and Healthcare products Regulatory Agency can remove pharmaceuticals from the market due to safety or efficacy concerns. See A Guide to Defective Medicinal Products.

[xxviii] As a result of a food incident, a food product may have to be withdrawn or recalled. See Food incidents, product withdrawals and recalls | Food Standards Agency.

[xxix] Liability for an aviation incident depends on the type of incident, but the airline is often held liable for in-flight injuries caused by unusual events, while aircraft owners face strict liability for surface damage to property. The Civil Aviation Act 1982 provides a framework for this, establishing that owners are liable unless the damage was caused by the victim’s negligence. Under the Montreal Convention 1999, airlines are also strictly liable for passenger injuries on flights as long as the injury results from an unexpected or unusual event. See Civil Aviation Act 1982 and IATA – Montreal Convention 1999.

[xxx] The Financial Conduct Authority enforces an individual accountability regime for financial service employees and senior managers under which sanctions, including fines or revocation of an individual’s approval to carry out a senior management function, can be levied against employees and managers that fail to act in line with conduct rules. See New rules? | Ada Lovelace Institute.

[xxxi] Doctors and pharmacists have a duty under the tort of negligence. The Consumer Protection Act 1987 also establishes ‘strict liability’, meaning that the producer of a product is liable for any defects in the product. However, in pharmaceuticals, few successful cases have been brought forward under this piece of legislation because the definition of a ‘defect’ is problematic in the context of medicines, where adverse reactions will be expected to occur in a proportion of patients for almost any medicine. There have also been examples of no-fault compensation schemes, where the claimant is required only to show harm from the product, without needing to prove the manufacturer’s fault. See Pharmaceutical regulation in the UK | Ada Lovelace Institute.

[xxxii] In food safety, the food business owner or proprietor is legally accountable for ensuring their business complies with food safety laws. This includes implementing and enforcing proper food safety management systems and training employees. See FSA Guidance Notes for Regulation 178/2002.

[xxxiii] Airlines are subject to different kinds of transparency reporting. Incident Reporting is mandated by UK Regulation 376/2014, which requires the reporting of safety related occurrences involving UK airspace users. The UK Emissions Trading Scheme also requires most aircraft operators to submit an annual emissions report, monitoring CO2 emissions. See Occurrence reporting | UK Civil Aviation Authority and Participating in the UK ETS – GOV.UK.

[xxxiv] Banks, building societies, investment firms, credit unions and insurers need to provide regulatory returns to the Prudential Regulatory Authority. The Financial Conduct Authority also has mandatory reporting requirements for regulated firms, which include submitting annual financial crime reports, complaints data, consumer credit data, and detailed information on derivative trades and other activities. See FCA Disclosure Guidance and Transparency Rules sourcebook: Chapter 4 – Periodic Financial Reporting, and Regulatory reporting | Bank of England.

[xxxv] In pharmaceuticals, clinical trial transparency includes registering trials in public registries, publishing trial outcomes within mandated timelines, and providing comprehensive safety data like Suspected Unexpected Serious Adverse Reactions in annual reports. Relevant regulatory frameworks include the EU Clinical Trials Regulation and national guidelines from the UK’s Health Research Authority. See ClinicalTrials.gov, Clinical trials for medicines: manage your authorisation, report safety issues – GOV.UK, Clinical trials for medicines: collection, verification, & reporting of safety events – GOV.UK, and Clinical Trials of Investigational Medicinal Products (CTIMPs) – Health Research Authority.

[xxxvi] All businesses are legally required to report to the Food Standards Agency and Food Standards Scotland if they have reason to believe that a food or feed product placed on the market is unsafe. The provisions of General Food Law (assimilated Regulation (EC) No 178/2002) also require businesses to report if new evidence emerges on the safety of an authorised product. There are also traceability requirements: food business operators must have systems in place to trace a product one step back, to see where it came from, and one step forward, to see who has been supplied. See Reforms to the market authorisations process for regulated products | Food Standards Agency and Guidance on Food Traceability, Withdrawals and Recalls within the UK Food Industry.

[xxxvii] There are routes for redress if an airline or aircraft has caused an individual harm or travel disruption. Consumers can file complaints with the airline directly, seek alternative dispute resolution, refer the complaint to the Civil Aviation Authority, or take legal action. Consumer protections are offered by several regulations, including assimilated Regulation (EC) No 261/2004 of the European Parliament and of the Council of 11 February 2004, which establishes common rules on compensation and assistance to passengers in the event of denied boarding and cancellation or long delay of flights (applicable in the UK pursuant to s. 3 European Union (Withdrawal) Act 2018 and amended by The Air Passenger Rights and Air Travel Organisers’ Licensing (Amendment) (EU Exit) Regulations 2018). See How to make a complaint | UK Civil Aviation Authority, Consumer protection law | UK Civil Aviation Authority, Regulation (EC) No 261/2004, and The Civil Aviation (Denied Boarding, Compensation and Assistance) Regulations 2005.

[xxxviii] Consumers can direct complaints about financial services and seek redress from the UK’s Financial Ombudsman Service. Additional protections are afforded by the Financial Conduct Authority’s Consumer Duty and the Financial Services Compensation Scheme. See Financial Ombudsman Service: our homepage, Consumer Duty | FCA, Financial Services and Markets Act 2000, and Financial Services Compensation Scheme | FSCS.

[xxxix] See endnote [xxxi].

[xl] The Food Standards Agency offers a range of reporting mechanisms for consumers and businesses that have been harmed by food products, services, or businesses and are seeking redress. Consumers can report food safety or hygiene issues to their local authority. Consumers can also seek compensation for food poisoning, to cover financial losses like the cost of private treatment or any loss of earnings; such claims are governed by the Consumer Protection Act 1987. See Report a food safety or hygiene issue | Food Standards Agency and Food poisoning compensation claims | Sheffield Solicitors | Graysons.
