Key Developments

The EU Council has agreed to postpone the establishment of AI regulatory sandboxes by national authorities until December 2, 2027, as part of the Digital Omnibus Package negotiations. The decision also clarifies the AI Office’s supervisory powers for general-purpose AI models and introduces new Commission obligations to provide compliance guidance for high-risk AI systems.

Simultaneously, the European Commission is actively recruiting AI technology specialists as contract agents to support the governance of cutting-edge AI models, with applications closing March 27, 2026. The seventh AI Board meeting on March 20 continued high-level coordination on implementation strategy.

Industry Context

These developments reflect the EU’s pragmatic approach to balancing regulatory oversight with practical implementation realities. The revised timeline for high-risk AI systems could extend compliance deadlines by up to 16 months, with final deadlines set for December 2027 for Annex III systems and August 2028 for high-risk systems covered by EU harmonisation legislation.

The extension particularly benefits SMEs and newly included small mid-caps (SMCs), providing additional time to develop compliance frameworks while standards and support tools are finalised.

Practical Implications

For Irish and European AI developers, these changes offer crucial breathing room. The delayed sandbox framework means national authorities have more time to establish proper testing environments, while the extended compliance deadlines allow companies to align their development cycles with emerging standards.

The Commission’s recruitment drive signals serious enforcement intentions: companies should expect sophisticated technical oversight once implementation begins. The upcoming Code of Practice on AI-generated content marking, due in Q2 2026, will provide voluntary guidance for transparency compliance.

Open Questions

Critical uncertainties remain around the exact standards that will define compliance requirements. While the Commission can accelerate deadlines if support tools become available early, the criteria for this determination aren’t fully specified.

The scope of AI Office supervision over general-purpose models needs further clarification, particularly regarding cross-border enforcement coordination and the practical mechanics of technical assessments for foundation models.


Source: artificialintelligenceact.eu