
Keep it simple? How ‘simplifying’ AI and data rules for big tech leaves people paying the cheque

If they materialise, the leaked EU Digital Omnibus proposals will represent the biggest retrenchment of fundamental rights in decades

Valentina Pavel , Julia Smakman

17 November 2025

Reading time: 17 minutes

Credit: Massimo Parisi

Last week’s unofficial release of the Data and AI Omnibus – one of six fast-track programmes of regulatory simplification produced by the Commission in various domains – offers a sharp picture of the European Commission’s intentions for the future of people’s digital rights and fundamental protections.  

The Digital Omnibus covering data and AI is expected to include changes to the GDPR, the ePrivacy Directive and the AI Act. These will be hugely consequential for the protection of people’s rights and societal resilience, and for the European Union’s position on global digital policy. The Omnibus as leaked would reverse the EU’s leadership role and render ‘adequacy’ decisions (where other countries’ level of data protection is evaluated against the EU’s standards) virtually a formality.

What is described as a ‘simplification’ package to technically streamline digital regulation will in fact fracture people’s rights and freedoms, and leave the door open for AI development with significantly lower safeguards. In spite of cautions that a general re-opening of the GDPR would be undesirable, the leaked Omnibus text includes several core modifications. This calls into question the legitimacy of claims that the proposals are streamlining measures when in fact they are closer to a full re-opening.

The data and AI omnibuses will be officially published on 19 November and sent to the European Parliament and the Council for review. The available text is only a leak but, even if some proposals are dropped, the fact that the Commission is considering such profound changes to the EU digital rulebook and the scale of the possible modifications must be recorded and reflected on.  

Besides questions surrounding who will actually benefit from the proposed simplification, what, in concrete terms, might be set to change?  

Below we analyse some of the most significant changes and their real-world impact. While the discourse informing the EU tech policy space tends to split data and AI into two separate subjects, we analyse the two proposals together, because the most consequential changes for AI development will result from the changes proposed in the Data Omnibus.

Changes to the GDPR

1. Incentivising ‘race-to-the-bottom’ compliance behaviour

One of the most significant changes under consideration targets the definition of personal data itself.  

Right now, personal data is defined as information that relates to an identified or identifiable person. If you can trace data back to an individual, or if you can link seemingly unrelated data to identify a person, this is personal data and is protected under the GDPR. The definition is intentionally broad because the nature of identification threats evolves over time. For example, the Court of Justice of the European Union has ruled that IP addresses can be used to identify an individual and therefore deserve protection.

If the proposed change goes through, GDPR protections will only apply if the ‘data controller’ – the entity that ‘determines the purposes and means of processing’, deciding how data is collected, used and protected – itself has the means to identify individuals from the data it holds. In other words, if the particular controller cannot identify a person, then it can treat the data as anonymous. This means that the controller can share the data with others who may have the ‘means reasonably likely’ to identify people from that dataset, leaving individuals exposed to potential abuse further down the data processing chain. 

The change, if accepted, is likely to significantly reduce incentives towards compliance and drive privacy and security risks for those living in the EU.   

The proposal ignores basic facts about the nature of data: that it is global and dynamic and circulates through a chain of processing entities. As a result, the change will likely encourage a race to the bottom in relation to anonymisation. If controllers are not required to invest in stress-testing anonymisation techniques, the market will have no incentive to set a reasonably high bar for pre-empting de-anonymisation, and companies will profit from this relaxation of data protection rules, leaving people exposed to harm.

The proposal also introduces further uncertainty, with market consequences, in another respect. The leaked text doesn’t state whether and how data controllers will document and record their assessment that they cannot identify people through the data they hold. With a separate ‘simplification package’ proposing to remove record-keeping obligations for organisations with fewer than 750 employees, how will regulators be able to investigate potential abuses? What mechanisms will be in place for aligning market practices on identification to maintain an appropriate standard of protection?

Example: Legitimising non-consensual mass data brokerage
A social media company compiles a pseudonymised dataset that shows all the posts that a particular group of people has liked, and then sells it to a data broker – an entity that buys and aggregates user data from thousands of sources (apps, websites, credit card companies, etc.).

 

If the data broker does not have a ‘means reasonably likely’ to re-identify the people in the dataset, then the data will not be personal data for the broker, who will not have to comply with the GDPR. It will be able to sell the data to others or publish it. It will not have to worry whether those accessing the data will have the means to re-identify people in the dataset or if sharing the data will lead to risks and harms.
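
To make the linkage risk concrete, here is a minimal, purely illustrative sketch in Python (using pandas, with invented names and data): a dataset that is ‘anonymous’ in the hands of the releasing controller is re-identified by anyone holding auxiliary data that shares a few quasi-identifiers.

```python
# Hypothetical illustration of a linkage attack: the released dataset contains
# no names, only a hashed ID plus quasi-identifiers (postcode, birth year).
import pandas as pd

released = pd.DataFrame({
    "hashed_id": ["a1", "b2", "c3"],
    "postcode": ["1050", "1000", "2000"],
    "birth_year": [1991, 1987, 1975],
    "liked_topics": [
        ["debt relief", "LGBTQ events"],
        ["political party X"],
        ["gardening"],
    ],
})

# Auxiliary data the broker already holds (e.g. from a loyalty scheme),
# containing real identities and the same quasi-identifiers.
auxiliary = pd.DataFrame({
    "name": ["Jane Doe", "John Smith"],
    "postcode": ["1050", "1000"],
    "birth_year": [1991, 1987],
})

# A simple join on the shared quasi-identifiers re-attaches names to the
# 'anonymous' records - no special 'means' required beyond a second dataset.
reidentified = released.merge(auxiliary, on=["postcode", "birth_year"])
print(reidentified[["name", "liked_topics"]])
```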

2. Lifting protections for inferred sensitive data  

Another consequential proposal targets Article 9 of the GDPR, which protects special category data. Right now, all sensitive data that may directly or indirectly reveal information such as sexual orientation, religious beliefs or political opinions is classified as special category data. Its processing is generally prohibited unless certain conditions are met. If the proposed change goes through, only the special category data that is directly revealed by the data subject will be protected.

This change not only disregards European case law and is inconsistent with Convention 108+, but also supports much of the problematic online activity that leads to real-world harm.

Sensitive data is often not directly revealed, especially in online spaces. There will be few occasions where data directly reports that ‘John has cancer’, or that ‘Emma is gay’. Most of the time, information about people’s sensitive characteristics is derived from correlations between different data points. For example, shopping preferences might indicate that someone is pregnant, and location data can reveal whether a person is at a gay bar or at a hospital. Browsing particular web pages might indicate someone’s political party affiliation, even if they have not publicly declared it. 

Personalised content and political advertising are based on inferred, rather than self-declared, data. The proposed change will enable profiling, behavioural targeting and predictive analytics through increased ‘comparison, cross-referencing or deduction’ of sensitive information not directly provided by people. 

Legitimising wide discretion for profiling presents risks to individuals and enables political targeting that has the potential to threaten democratic norms throughout Europe.

Example: Permission to process inferred sensitive data
Thirty-year-old Cindy is feeling overwhelmed and experiencing financial struggles. She searches for terms like ‘side hustles for extra income’, ‘student loan relief’, ‘symptoms of burnout’, and ‘how to deal with financial stress’ (described as ‘search data’). She frequently reads articles on financial advice, visits forums where people discuss job insecurity and has spent time on mental wellness sites reading about anxiety management techniques (described as ‘website data’). She has joined a Facebook group called ‘Navigating Debt in Your 30s’ (described as ‘social media data’).

 

A data broker collects all these signals. It correlates data points to build profiles, which it sells for profit. From Cindy’s online activity, the broker labels her as carrying high debt, suffering from job insecurity, having low confidence and being very anxious. The broker infers that she is in a financially unstable and vulnerable mental state.

 

In the run-up to presidential elections, a party is looking for ways to convince voters of its political agenda. Its internal research suggests that voters who feel anxious about the future and economically insecure are highly susceptible to messages of radical change and blame directed at the current regime. The data broker pulls up Cindy’s profile as ‘Anxious and Persuadable’.

 

She is targeted with an emotionally charged advertising campaign, playing on her vulnerable mental state with fear-based messages and validating her feelings of hopelessness. The ads blame the ‘established’ political system and present the party’s candidate as the only solution. Cindy is mobilised into voting for this candidate, her inferred financial and psychological vulnerability leveraged to steer her behaviour.
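
As a purely illustrative sketch (in Python, with invented signals and crude keyword rules), this is roughly how such labels are produced: none of the inputs is a characteristic Cindy has declared, yet the output is exactly the kind of sensitive profile the proposed change would leave unprotected.

```python
# Toy, hypothetical example: deriving sensitive labels from behavioural signals
# alone. Nothing below is 'directly revealed' by the user.
from collections import Counter

# Signals of the kind described above: search terms, pages visited, groups joined.
signals = [
    "side hustles for extra income",
    "student loan relief",
    "symptoms of burnout",
    "how to deal with financial stress",
    "forum: job insecurity",
    "group: Navigating Debt in Your 30s",
]

# Crude keyword rules mapping signals to inferred traits a broker might sell.
rules = {
    "financially vulnerable": ["debt", "loan", "income", "financial"],
    "anxious": ["burnout", "stress", "anxiety"],
    "job insecure": ["job insecurity", "side hustles"],
}

scores = Counter()
for signal in signals:
    for label, keywords in rules.items():
        if any(keyword in signal.lower() for keyword in keywords):
            scores[label] += 1

# Labels that cross a threshold become part of a sellable segment,
# e.g. 'Anxious and Persuadable', ready for targeted political messaging.
profile = [label for label, hits in scores.items() if hits >= 2]
print(profile)
```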

3. Legalising today’s unlawful AI training practices  

The digital omnibus also introduces modifications that will make it easier to process personal data for AI use and development. This includes a new GDPR Article, 88c, which states that processing personal data to develop and operate an AI system or model can be in the ‘legitimate interest’ of an organisation.  

Organisations will be able to rely on legitimate interests as a lawful ground to process personal data, with some safeguards, such as the legitimate interest test, data minimisation, transparency obligations and the right to object. This means that where an organisation thinks its interests to process personal data for AI training do not interfere with people’s rights, interests or freedoms, it can go ahead with AI training and development. 

Notably, the formulation of ‘legitimate interest’ here is broad and includes the ‘operation’ of AI systems without giving a definition of what ‘operation’ means in this context. As a result, it is unclear what – if any – personal data processing in an AI context would not be allowed. 

Further proposed changes enable organisations to process special category data for AI training, testing and validation, where removing it from the already trained model would require a ‘disproportionate effort’. Under this change, companies can defend a decision to scrape sensitive data from the web into an AI model on technical and financial grounds. 

It will be at the discretion of AI developers to set the limit for what is responsible processing and ‘disproportionate effort’, turning an important legal protection for special category data into a convenience exercise – ‘what level of effort am I as a developer ready to technically or financially invest?’ – and leaving sensitive personal data processing at industry’s discretion.  

This change flips the logic of the proportionality test, used across the GDPR, upside down: instead of the rights of the individual being the measure of when data processing is proportionate, proportionality is based on an AI company’s self-assessed burden.

Example: Removing privacy safeguards from AI training
The information that a user – ‘Jane’ – shares with her chatbot is re-used to train an AI model even though she has not consented to this.

 

While the proposed Article 88c mentions that data subjects have an ‘unconditional right to object’ to the collection of their personal data, data scraping is a messy process. It is unlikely that AI companies will have the contact details of every person whose personal data ends up in a dataset. ‘Jane’ might not even know her data is part of the dataset, and will not be able to use her right to object. 

 

The AI company will assess whether to remove sensitive data from a training dataset based on how much effort that will take. So, even if ‘Jane’ does know that her data is in the scraped dataset and includes information about her health or religion, and she objects to its use, the company will still be able to use this data if removing it would ‘require re-engineering the AI system or AI model’. The condition for using Jane’s data is to ‘effectively protect’ her sensitive information from being used to infer outputs, from being disclosed or from being made available to third parties. Lines seem to be drawn by the organisation’s own assessments and technical capabilities, not people’s rights and freedoms.

4. Legitimising automated decision making without consent and irrespective of public interest 

A fourth proposed change targets the regulation of automated decision making (ADM). Right now, most ADM is prohibited, with some exceptions. The proposed change to Article 22 would frame it in language that signals it is permissible.

More specifically, in the case of contractual relations (such as when you register on a social media platform or in employment), existing data protection guidelines state that it is legal to use an ADM system to enter a contract only if such use meets the principle of ‘necessity’ under European law. 

With the proposed change, if a data controller decides that ADM is ‘necessary’, they will simply be able to use it, regardless of whether the decision could have been taken through other less-intrusive means. The choice will be at their discretion. 

In practice, this will lead to increased use of ADM.

Example: Legitimising non-consensual ADM in the context of employment contracts
A supermarket chain operates an automated system to allocate shifts to warehouse workers and set variable levels of pay. The system is based on historical availability and productivity data about the individual workers, even though the data does not account for the reasons behind particular absences or dips in productivity.

 

The supermarket chain will be able to say that ADM for work and pay allocation is necessary if workers want temporary, seasonal or hourly-based employment, which may lead to unfair practices, potential discrimination and little option for appeal. Indeed, the use of ADM may result in workers being given few or no shifts, or lead to contract termination.

 

Moreover, nothing will bar the company from using the system to also make inferences on the potential performance of prospective employees, using CV-extracted information about their education level and linking applicants to risk profiles based on data about existing workers.  

 

Under the proposed changes, it will be at the discretion of the supermarket to say that creating risk profiles for pre-hiring processes is necessary for entering an employment relationship. The change will accrue power to the data controllers in charge of a contract, as the use of ADM systems to process and decide upon a subject’s data will be at their discretion, even if less intrusive and not fully automated means are available. 
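
A minimal, hypothetical sketch in Python of the allocation logic described above shows how easily such a system reduces workers to a score that ignores context, with no visibility into the weighting and no route to appeal.

```python
# Hypothetical shift-allocation logic: workers are ranked by a score built
# solely from historical availability and productivity. The system has no
# knowledge of why someone was absent or less productive.
workers = [
    {"name": "Worker A", "availability": 0.95, "productivity": 0.80},
    {"name": "Worker B", "availability": 0.70, "productivity": 0.90},  # absences caused by illness
    {"name": "Worker C", "availability": 0.85, "productivity": 0.60},
]

def score(worker):
    # Arbitrary weighting chosen by the employer; workers cannot see or contest it.
    return 0.6 * worker["availability"] + 0.4 * worker["productivity"]

# Only the top-ranked workers are offered shifts; the rest may get few or none.
shifts_available = 2
ranked = sorted(workers, key=score, reverse=True)
for position, worker in enumerate(ranked):
    offered = "shift offered" if position < shifts_available else "no shift"
    print(worker["name"], "->", offered)
```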

Changes to the ePrivacy Directive: rules on terminal equipment moved to the GDPR, leaving personal data less protected than non-personal data

The Omnibus proposes to move the legal regime for processing personal data on terminal devices from the ePrivacy Directive to the GDPR. A new exception, added to Article 6 of the GDPR, will state the specific purposes for processing personal data on terminal equipment which do not require a lawful ground.  

The four pre-defined purposes are: 

(a) carrying out the transmission of an electronic communication over an electronic communications network;

(b) providing a service explicitly requested by the data subject;

(c)  creating aggregated information about the usage of an online service to measure the audience of such a service, where it is carried out by the controller of that online service solely for its own use;

(d) maintaining or restoring the security of a controller’s service requested by the data subject or the terminal equipment used for the provision of this service. 

While most of the purposes are more technical in nature, the ability to create ‘aggregated information about audience measurement’ for the service’s own use raises questions. When a company is as big as Google or Meta, it can interpret ad measurement loosely and directly profit from it. Legally speaking, this change sets a dangerous precedent by overstepping the very foundations on which the GDPR is built: that for any processing of personal data there must be a legal basis.

A positive aspect of the suggested change is that for purposes outside the four specifically stated (including ad measurement), individuals may be able to accept or reject a request for consent and to exercise the right to object to direct marketing through ‘automated and machine-readable means’. Ironically, the change also means that non-personal data will be much better protected under the strict regime of the ePrivacy Directive. 

Example: Enabling comprehensive behavioural surveillance and monetisation
A website claims that its extensive tracking of user traffic – recording every click, scroll, hover and time spent – falls under ‘audience measurement’ for its own purposes. It interprets this exception broadly to include building behavioural profiles, A/B testing and measuring emotional responses to ads through engagement patterns, without consent or any assessment.

 

The collected data reveals users’ vulnerabilities, effectively turning the ad measurement exception into a back door for comprehensive behavioural surveillance and monetisation by large tech companies.

 

Taken together with the changes to inferred sensitive information (which propose to protect information only ‘directly revealed’ by individuals, not information that is deduced or correlated), the risks to personal data protection become substantially higher.

Changes to the AI Act: driving the concealment of risk in AI value chains

The leaked Omnibus proposes important changes to the AI Act – an already hard-won compromise – before much of it has even come into force. Right now, Article 49 of the Act requires providers of AI systems that fall into the high-risk category – such as those used in hiring processes or to calculate insurance premiums – to register their system in an EU-wide database.

The proposed changes would lessen this requirement. AI providers that assess their own system as not being ‘high risk’ will no longer need to register their product, reducing the visibility of its application and evading transparency measures.

The draft AI Omnibus also proposes that the exception made for micro enterprises, allowing them to use a simplified Quality Management System (QMS), be extended to all SMEs, including start-ups.

In the context of AI, however, start-ups can quickly reach large scales of operation in terms of budget and employees, making the change significant. A less rigorous QMS could lead to key elements of the safety of an AI system developed by a successful start-up being missed, and to poorer documentation that makes it harder to audit or trace an incident.

Public evidence for the streamlining of digital regulation

The need to streamline data and AI regulation, offering clear and prompt guidance for straightforward applications, cannot come at the expense of people and society. It is imperative that the simplification process is evidence-based. Without a clear picture of where legal compliance falls short and where it becomes too onerous and for whom, there cannot be a good faith discussion about reducing the burdens associated with it. 

In this respect, the proposed measures do not appear to account for the level of maturity of the different regulations and the role they play in futureproofing against risk and harm. The GDPR rulebook is only just starting to mature after seven years of implementation – a short time in the lifespan of any regulation. Where enforcement is known to be limited and under-resourced, it is difficult to properly assess the impact of the law, and removing elements of accountability, transparency and rights protection from its architecture is premature.

Businesses have pointed to the significant investment made over the last few years in skills and technical measures to ensure smooth compliance with the GDPR. This would be disrupted by changes to the regime. These potential losses, in addition to the burden of continued compliance, should be estimated and published by the Commission as it seeks to balance trade-offs within its simplification agenda.

With respect to other types of regulation such as the AI Act, which is only just being implemented, there is little data with which to judge the implications of the Omnibus proposals. It is difficult to see what the fundamental case is for proposing rule changes and how this can be evidenced. 

Conclusion

Together with 126 other organisations, the Ada Lovelace Institute has urged the European Commission to retract the proposals to modify the GDPR, ePrivacy and AI Act on the grounds of scale and lack of evidence.

The scale of the proposed changes – some of which are in direct conflict with the Charter of Fundamental Rights and Convention 108 of the Council of Europe – is not appropriate for legislative amendment under the fast-track omnibus procedure. Because they modify core parts of digital regulation, there is a requirement for robust evidence and the publication of a full fundamental rights impact assessment to evaluate the removal of safeguards as a whole, instead of one by one.

When much of the existing digital regulation has not been fully implemented, let alone properly matured and enforced, there cannot be an evidence-based case for a rapid deletion of essential requirements, obligations, rights and protections.  

 

 

 
