The Carve-Out That Changes Everything

In the final hours of AI Act trilogue negotiations on May 7, 2026, the European Parliament secured what amounts to the first major exemption from the EU’s flagship AI legislation: artificial intelligence systems embedded as safety components in regulated products—machinery, toys, medical devices, and connected devices—will now face significantly lighter compliance scrutiny than standalone AI systems.

This wasn’t a technical clarification. It was a deliberate regulatory choice that exposes a fundamental tension in Europe’s AI governance approach: the question of whether to create separate regulatory tracks for AI based on what it’s embedded in, rather than what it does.

What Actually Changed

The Parliament’s negotiating position centered on a straightforward argument: if an AI system is already embedded within a product that’s subject to CE marking, product safety directives, or medical device regulations, subjecting that same AI to additional high-risk AI Act requirements creates redundant compliance overhead without proportional safety gains.

This exemption applies specifically to AI that functions as a component of regulated products rather than as a standalone offering. An AI system controlling machinery safety, flagging defective medical devices, or powering features in connected toys would fall under the existing regulatory frameworks for those products—not the AI Act’s high-risk category.

The practical effect: enterprises embedding AI into machinery, medical devices, or connected products will have until August 2, 2028 to achieve full compliance, while high-risk AI systems in law enforcement, education, and biometrics face a December 2, 2027 deadline—an eight-month gap that matters enormously for implementation planning.

Why Irish Manufacturers Should Pay Attention

Ireland’s industrial sector—particularly in medical devices, pharmaceutical manufacturing, and precision engineering—is heavily invested in product compliance regimes already governed by CE marking and product safety standards. The regulated product exemption gives Irish firms an implementation advantage, provided they can demonstrate their AI systems genuinely function as safety components rather than primary decision-making systems.

However, the exemption creates ambiguity. The boundary between “AI embedded in a regulated product” and “AI that merely happens to be installed on regulated machinery” isn’t crisp. An AI system that recommends maintenance schedules versus one that automatically triggers maintenance decisions could fall on opposite sides of this line.

The Enforcement Gap Nobody’s Discussing

This carve-out rests on a foundational governance assumption: that existing product regulations are sufficient to govern AI risk. That assumption may not hold. Medical device regulations were written long before large language models existed. Machinery directives predate transformer architectures. Embedding AI into products regulated under 1990s-era frameworks doesn’t automatically mean those frameworks address 2026-era AI-specific harms.

The Irish AI Office, due to be operational by August 1, 2026, will inherit responsibility for policing whether enterprises genuinely qualify for the regulated product exemption or are exploiting it as a compliance shortcut.

Open Questions

How will national authorities determine whether AI in a regulated product deserves the exemption? What happens if an AI system that was compliant under product regulations begins generating novel risks—like bias in medical diagnostic recommendations—after deployment? Will the August 2028 deadline apply equally to high-risk AI in regulated products, or will national divergence create enforcement fragmentation across member states?

Europe has chosen to trust existing product frameworks to govern AI. Ireland will test whether that trust was justified.


Source: EU Council AI Act Amendment Agreement