EU Digital Omnibus Rollbacks Threaten to Strip Transparency Safeguards from High-Risk AI Systems
Amnesty International warns EU's November 2025 Digital Omnibus proposals would eliminate mandatory risk assessments and weaken AI Act protections for citizens.
What Happened
On 2 April 2026, human rights organization Amnesty International raised significant concerns about the European Commission’s Digital Omnibus proposals, which would fundamentally alter how the AI Act handles transparency and risk accountability. The proposals, originally tabled in November 2025, would eliminate requirements for companies to publish assessments determining whether their AI systems qualify as “high-risk” under EU rules.
This represents a critical shift from the AI Act’s current framework, which mandates that developers of high-risk systems—those that could pose risks to health, safety, or fundamental rights—conduct and document rigorous impact assessments. Under the proposed changes, these assessments would no longer need to be made publicly available, effectively removing a key mechanism for independent scrutiny.
Why It Matters
The Digital Omnibus package emerged from negotiations between the EU Council (which approved its mandate in mid-March 2026) and the European Commission, ostensibly to streamline implementation timelines. The proposed adjustments would push application deadlines for high-risk AI rules to 2 December 2027 for stand-alone systems and 2 August 2028 for product-embedded systems—delays of up to 16 months from originally scheduled dates.
But Amnesty’s analysis suggests these delays come at a steep cost. By eliminating mandatory transparency around risk determinations, the proposals would:
- Concentrate power: Companies would become the sole judges of whether their own systems pose genuine risks, with no external check on that determination
- Obstruct accountability: Citizens and civil society organizations would lose visibility into how AI systems are classified and what safeguards apply
- Weaken enforcement: Regulators and advocacy groups would have fewer tools to challenge inadequate corporate risk assessments
The timing is particularly significant for Ireland, where the AI Office must become operational by 1 August 2026 to meet EU AI Act deadlines. A weakened transparency framework would directly undermine Ireland’s ability to effectively enforce and implement the Act at national level.
What This Means for Builders and Users
For AI developers operating in Europe, these changes would reduce compliance friction in the short term—no public disclosure of risk assessments could mean less reputational exposure and fewer stakeholder questions. However, this comes with regulatory uncertainty: a Digital Omnibus that’s perceived as gutting protections could trigger political backlash, stricter national-level rules from member states, or international pressure for stronger global standards.
For end-users and civil society, the practical implication is clear: less transparency and weaker accountability. High-risk AI systems—including those used in employment screening, credit scoring, or law enforcement—would operate with diminished public oversight.
What’s Still Unclear
The Digital Omnibus negotiations are ongoing between the Council and Parliament, and the final form of transparency requirements remains subject to amendment. Key questions remain:
- Will Parliament heed Amnesty’s concerns and push back against weakened disclosure requirements?
- How will member states like Ireland implement their own frameworks if transparency obligations are diluted at EU level?
- Will civil society organizations successfully mobilize political pressure to preserve the original transparency mechanism?
The outcome will significantly shape how European AI governance evolves through 2026 and beyond.
Source: Amnesty International