The 2025 European Commission EU digital omnibus package: The EU AI Act

The AI Omnibus Proposal introduces targeted amendments to Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (the AI Act). The Proposal does not alter the risk-based structure of the AI Act, the categorisation of AI systems, or the substantive obligations applicable to prohibited, high-risk, or limited-risk AI systems. Instead, it focuses on implementation and timing, and on making the governance arrangements work in practice. In substance, the amendments are confined to implementation sequencing and procedural adjustments, including transitional timelines, institutional coordination, and compliance-related procedures, intended to facilitate the application and enforcement of the AI Act as adopted, while also making limited, targeted amendments to (i) AI literacy (Article 4), (ii) special-category data processing for bias monitoring/detection (Article 10(5) and the insertion of a new Article 4a, with Article 4a intended to replace the existing Article 10(5) approach), and (iii) interactions with sectoral product conformity assessment regimes (including amendments to Articles 6(4), 28–30 and 43(3)).

Proposals most likely to be adopted and rationale

This Section addresses, in turn:

  1. Amendments to transitional and phasing-in provisions; 
  2. Targeted adjustments to governance and coordination mechanisms; and 
  3. Refinements to conformity assessment and documentation processes, without altering the substantive obligations applicable under the AI Act, including targeted streamlining for product-related high-risk AI systems and SME-facing documentation simplifications.

1. Amendments to transitional and phasing-in provisions

The AI Omnibus Proposal introduces targeted amendments to the transitional provisions in Article 113 of the AI Act, which govern the timing and sequencing of the application of obligations under the AI Act.

In particular, the Proposal adjusts the application dates for selected obligations applicable to providers and deployers of AI systems, including requirements relating to risk management (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15), as well as related high-risk system obligations referenced in Article 113 (including Articles 8 and 16). The Proposal does not make compliance conditional on the mere “availability” of standards. Instead, it introduces a sequencing mechanism tied to a Commission decision confirming that adequate measures exist to support compliance with the relevant Chapter III obligations; after that decision, Chapter III applies following a defined period (six months for Annex III high-risk systems and twelve months for certain Annex I product-related high-risk systems). If no such Commission decision is adopted by fixed longstop dates, Chapter III applies in any event from those dates (2 December 2027 for Annex III high-risk systems; 2 August 2028 for certain Annex I product-related high-risk systems). The transitional text also introduces a distinct compliance date for specified Article 50(2) transparency steps (2 February 2027). Providers and deployers of high-risk AI systems used by public authorities will still have until 2 August 2030 to comply with the Act’s requirements.

These amendments are likely to be adopted because they:

  • Do not amend the content or legal thresholds of the obligations set out in Articles 9 to 15;
  • Are confined to sequencing and timing, addressing implementation constraints identified in advance of application; and
  • Preserve the enforceability of the EU AI Act once the adjusted timelines take effect, while providing fixed longstop dates where standards/common specifications are delayed.

The EDPB/EDPS Joint Opinion 1/2026 accepts that sequencing changes may be justified to ensure workable implementation, but cautions against transitional drafting that makes the application of core high-risk requirements overly contingent on the timing or availability of harmonised standards/common specifications. In the EDPB/EDPS view, the Proposal should avoid creating extended periods in which high-risk AI systems are placed on the market or put into service without the effective application of the Chapter III safeguards, and should ensure that any transitional flexibilities remain narrowly framed, legally certain, and enforceable in practice. The Opinion also flags that “moveable” or decision-dependent timelines can undermine legal certainty and may affect the protection of fundamental rights. Finally, it invites co-legislators to consider maintaining the current timeline for certain obligations (explicitly citing transparency as an example), even if other obligations are deferred.

2. Adjustments to governance and coordination mechanisms

The Proposal further introduces amendments affecting governance and coordination under Chapter VII of the AI Act, including provisions relating to cooperation between national competent authorities and Union-level bodies.

In particular, the Proposal clarifies aspects of:

  • Cooperation between market surveillance authorities designated pursuant to Article 70;
  • The exchange of information and coordination of supervisory activities in relation to high-risk AI systems; and
  • The operation of coordination mechanisms involving Union-level structures established under the EU AI Act.

The EDPB/EDPS Joint Opinion 1/2026 is supportive of EU-level regulatory sandboxes as an innovation tool (including for SMEs), but stresses that the role and competence of data protection authorities must be explicit where sandbox activity involves personal data processing. In particular, the Opinion recommends clarifying directly in the AI Act that competent data protection authorities should be associated with the operation of EU-level sandboxes and involved in the supervision/enforcement of the corresponding processing, in line with Articles 55 et seq. GDPR. It also flags uncertainty as to how the competent data protection authority would be identified for EU-level sandboxes and how this interacts with the GDPR cooperation mechanism, and recommends that this interplay be clarified (rather than left to implementing acts alone). The Opinion further recommends granting the EDPB observer status at the AI Board to ensure continuous involvement where matters related to the application of data protection law (including EU-level sandboxes) are discussed.

In addition, the Proposal makes targeted adjustments to: (i) AI regulatory sandboxes and real-world testing (Articles 57–60, including the insertion of a new Article 60a), with a stated purpose of providing “a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market”; and (ii) the Commission’s ability to implement templates/criteria via implementing acts in specified coordination areas (including amendments to Article 70(8)).

These amendments are likely to be adopted because they:

  • Do not alter the allocation of supervisory competence established under the EU AI Act;
  • Are directed at improving consistency and coordination in supervision; and
  • Are framed as procedural clarifications rather than as changes to enforcement powers, noting that the sandbox/real-world-testing amendments are positioned as implementation enablement rather than a relaxation of Chapter III requirements.

3. Refinements to conformity assessments and documentation processes

The Proposal introduces targeted refinements affecting conformity assessments and documentation obligations applicable to high-risk AI systems under Articles 43 to 49 of the EU AI Act.

These refinements include adjustments intended to streamline interactions with notified bodies and to clarify the documentation requirements that apply prior to placing high-risk AI systems on the market or putting them into service, without modifying:

  • The categories of systems subject to conformity assessment under Article 43;
  • The applicable conformity-assessment procedures; or
  • The substantive requirements set out in Annex IV of the AI Act.

More specifically, the Proposal: (i) clarifies the trigger affecting which product-related AI systems are to be treated as high-risk (Article 6(4)); (ii) streamlines the obligations governing interaction between the provider and the notified body and “single assessment” mechanics in regulated product contexts (including amendments to Articles 28–30 and Article 43(3)); and (iii) introduces documentation simplification levers for SMEs (including targeted amendments to Article 11).

This element of the Proposal is likely to be adopted because it:

  • Preserves the existing conformity-assessment architecture;
  • Addresses practical issues relating to documentation and process sequencing; and
  • Is limited to technical clarification rather than substantive reform, particularly for AI systems embedded in products already subject to third-party assessment under the EU harmonisation legislation.

Taken together, these measures are likely to be adopted because they do not reopen politically settled questions addressed during the adoption of the AI Act, do not affect the scope of Articles 5 or 6 or Annex III, and do not alter the level of protection applicable to high-risk AI systems. Their legal effect is confined to enabling implementation within existing institutional and conformity-assessment capacity, an approach that has historically attracted limited resistance in trilogue where substantive safeguards remain unchanged.

The EDPB/EDPS Joint Opinion raises specific accountability concerns in relation to registration/documentation modifications. In particular, the EDPB/EDPS emphasise that registration of high-risk AI systems serves not only transparency, but also early visibility for national competent authorities and for public authorities/bodies supervising obligations under Union fundamental-rights law (FRABs), enabling timely scrutiny and, where appropriate, enforcement engagement before systems are placed on the market or put into service. The Opinion cautions that relying solely on provider documentation available on request is not an adequate substitute for registration where the AI Act otherwise allows a provider to self-assess that an Annex III system is not high-risk, especially given the existing divergence in interpretation and the risk of incorrect assessment.

The Opinion goes further: it recommends maintaining the registration obligation precisely because it (i) informs the public and supports deployer due diligence/risk-management, and (ii) informs competent authorities/FRABs before placing on the market/putting into service, enabling timely scrutiny and mitigation. It states that the exemption under Articles 6(3)–6(4) must remain counterbalanced by appropriate accountability, and that removing registration is not justified by the negligible administrative savings.

Proposals more likely to be challenged or rejected, and rationale

This Section addresses, in turn:

  1. Amendments affecting the scope and operation of high-risk AI obligations and the AI literacy obligation; 
  2. The introduction of a statutory route for the processing of special categories of personal data for bias detection and mitigation; and 
  3. Issues of legal certainty and enforceability arising from the drafting and sequencing of the Proposal.

1. Amendments affecting the scope and operation of high-risk AI obligations

Elements of the Proposal that affect the operation of obligations applicable to high-risk AI systems under Chapter III of the AI Act are more likely to attract scrutiny during the legislative process.

In particular, where the Proposal adjusts the application of the requirements under Articles 9 to 15 of the AI Act (risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity), concerns may arise as to whether such adjustments could be interpreted as lowering the level of protection or oversight applicable to high-risk AI systems.

This aspect of the Proposal is more likely to be challenged where drafting could be read as:

  • Narrowing the circumstances in which an AI system falls within the definition of a high-risk AI system under Article 6 of the AI Act, read together with Annex III; or
  • Reducing the intensity or evidential thresholds of conformity assessment procedures under Articles 43 to 49 of the AI Act.

The Proposal’s targeted clarification of the “product-interface” trigger for high-risk qualification (Article 6(4)) may be tested politically, given its practical effect on which Annex I product-related AI systems are brought within the Chapter III high-risk requirements.

As a result, amendments in this area are likely to be narrowed or accompanied by clarifications to confirm that any changes are confined to sequencing or process and do not alter the substantive obligations imposed on providers or deployers of high-risk AI systems.

Negotiation pressure is likely to focus not on the legitimacy of sequencing adjustments as such, but on ensuring that revised timelines and procedural flexibilities cannot be relied upon to defer or dilute compliance with Articles 9 to 15 of the AI Act once the amended application dates have passed. This concern is likely to be addressed through recital-level clarification rather than substantive amendment of the operative provisions.

The Proposal also removes the directly binding obligation on providers and deployers to ensure a sufficient level of AI literacy, replacing it with a duty on the Commission and Member States to encourage such measures through non-binding initiatives. This shift is likely to be contested on the basis that it weakens a horizontal compliance obligation originally designed to operate as a preventive safeguard. On the other hand, the AI literacy requirement has been criticised for lacking clarity as to how it should be implemented in practice, and its removal would put such doubts to rest.

The EDPB/EDPS Joint Opinion expresses concern at the downgrading of the AI literacy obligation, treating AI literacy as a horizontal, preventive safeguard that supports effective compliance and risk management. The EDPB/EDPS indicate that replacing a directly applicable duty with non-binding encouragement risks undermining consistent implementation across Member States and weakens a measure that was designed to operate ex ante, rather than only through ex post enforcement.

The Opinion is more direct: it states that transforming the current obligation into encouragement would significantly soften the obligation and ultimately undermine its very objective, and suggests that guidance on implementation would be preferable to removing the duty. Alternatively, if a duty to encourage is retained, it should apply in parallel with (not replace) the current Article 4 obligation.

The Opinion recommends maintaining the obligation to register in the EU database where a provider concludes that a system listed in Annex III is not high-risk. It cautions that removing that registration step would materially reduce accountability and transparency, and could create incentives to over-use non-high-risk conclusions in borderline cases. This aligns with the Opinion’s broader recommendation to maintain registration, since registration enables early visibility for competent authorities and FRABs before placing on the market or putting into service, and is not adequately replaced by documentation available on request.

2. Processing of special categories of personal data for bias detection and mitigation

The Proposal introduces a new provision in the AI Act that will permit, in narrowly defined circumstances, the processing of special categories of personal data for the purposes of bias detection and correction in relation to high-risk AI systems.

This amendment, reflected in the insertion of a new Article 4a into the AI Act, which operates as a lex specialis alongside Article 9 GDPR for the limited purposes specified, is likely to be closely scrutinised because it creates an explicit statutory route for processing data falling within the special categories afforded heightened protection under Article 9(1) GDPR in the context of AI compliance. The Proposal also replaces the existing Article 10(5) approach by moving the legal basis and conditions into Article 4a.

Although the Proposal conditions this processing on meeting strict safeguards, including assessments of necessity and proportionality, security measures, access controls, and deletion requirements, legislators may seek to narrow:

  • The categories of AI systems to which the provision applies;
  • The circumstances in which such processing may be regarded as “necessary”; and
  • The interaction between Article 4a of the AI Act and existing obligations under the GDPR and Regulation (EU) 2018/1725.

The EDPB/EDPS Joint Opinion supports the objective of enabling effective bias detection and correction in principle, but recommends materially tighter drafting to prevent function creep and to ensure the derogation remains exceptional. The EDPB/EDPS call in particular for: (i) clear circumscription so that the legal route cannot be relied upon outside the high-risk context, and is limited to cases where the risk of adverse effects caused by bias is sufficiently serious; and (ii) preservation of a “strict necessity” standard (rather than a diluted necessity threshold), coupled with robust safeguards for rights and freedoms.

The Opinion is explicit that the Proposal would extend the material and personal scope of the current Article 10(5) (high-risk context) to all AI systems and models and would also cover deployers. It warns that any ability to rely on this legal ground in non-high-risk contexts should be clearly circumscribed and limited to cases where the risk of adverse effects caused by bias is sufficiently serious to justify processing special categories of data. It also flags that the current “strictly necessary” wording is weakened in the Proposal (new Article 4a(1) refers only to “necessary”), and recommends keeping a strict necessity standard. The Opinion also recommends improving legal certainty in the operative wording (including because the drafting is likely to create uncertainty as regards Articles 6 and 9 GDPR), and suggests adding a recital with concrete examples to justify any extension beyond high-risk. Finally, it stresses that data protection authorities remain competent to supervise processing pursuant to Article 4a, consistent with Article 2(7) of the AI Act.

The same political sensitivity is likely to affect the paired amendment of Article 10(5), which changes how (and on what conditions) special-category data may be processed in the context of bias monitoring and detection (including, as framed in the Proposal materials, tighter conditions around testing datasets and access/retention constraints).

This element of the Proposal is therefore more likely to be narrowed through tighter drafting of necessity and proportionality conditions, or through express cross-reference to safeguards equivalent to those required under Article 9(2) and Article 89(1) GDPR, in order to mitigate the risk to fundamental rights under Articles 7 and 8 of the Charter of Fundamental Rights of the European Union.

3. Issues of legal certainty and enforceability arising from drafting and sequencing

The drafting of the AI Act reforms under the Proposal gives rise to issues of legal certainty where it adjusts implementation and sequencing provisions without fully specifying the legal consequences during transitional periods.

In particular, uncertainty may arise as to:

  • The enforceability of obligations subject to revised application dates under Articles 113 to 116 of the AI Act;
  • The treatment of AI systems placed on the market or put into service during periods in which amended timelines apply; and
  • The interaction between revised sequencing provisions and enforcement powers under Articles 99 to 101 of the AI Act.

In practice, the key friction affecting legal certainty is the Proposal’s greater reliance on Commission decisions and subsequent fixed “application-after” periods, combined with supporting implementation measures (including harmonised standards/common specifications), which may create disputes as to (i) what must be in place to justify a Commission decision, (ii) the legal effect of partial coverage by standards/common specifications, and (iii) evidencing compliance during the interim.

The EDPB/EDPS Opinion accepts that some delay drivers may be objectively grounded (including standards and supervisory capacity), but cautions that postponement and conditional commencement can undermine legal certainty and materially affect the protection of fundamental rights in a fast-evolving AI landscape. The EDPB/EDPS invite co-legislators to consider maintaining the existing timeline for certain obligations, explicitly citing transparency as an example, even if other elements are deferred. They also flag the cumulative effect of postponement when combined with any extension of the “grandfathering” treatment for legacy systems, which could further delay the point at which Chapter III controls apply in practice.

Given that the AI Act establishes directly applicable obligations and sanctions, these issues are material. This aspect of the Proposal is therefore more likely to be refined through tighter drafting or additional recitals clarifying the legal effect of the amended provisions and their interaction with enforcement mechanisms.

Absent clarification, these uncertainties may affect the uniform application of administrative fines and corrective measures under Articles 99 to 101 of the AI Act, particularly in cross-border cases where the temporal applicability of obligations is contested. For that reason, this aspect of the Proposal is likely to be refined through additional recitals or narrowly framed transitional provisions clarifying the enforceability of obligations during adjusted implementation periods.

The EDPB/EDPS Joint Opinion also raises concrete legal-certainty issues concerning supervision and enforcement. It recommends (i) clearer delimitation of the types of general-purpose AI models that trigger the AI Office’s “exclusive competence”, and (ii) express operative-text clarification that the AI Office is not competent to supervise AI systems developed or used by Union institutions, bodies, offices or agencies that fall under EDPS supervision (noting that recital-only clarification is insufficient for legal certainty). It further recommends clarifying that the Proposal does not affect the independence and existing powers of data protection authorities (including powers to obtain all information necessary to monitor compliance with data protection law), and that the role of market surveillance authorities in handling FRAB requests should be strictly administrative (execution/transmission) and not extend to assessing the necessity/proportionality of the request.

The EDPB/EDPS repeatedly emphasise that simplification measures must be framed so that they do not create accountability gaps during transitional periods, and that the enforceability of Chapter III safeguards should not be weakened by drafting that leaves material uncertainty as to when obligations bite and how compliance is evidenced in practice.