US Federal AI Regulation Lands Amid European Implementation Sprint

The March 19, 2026 enforcement date for the US RAISE Act marks a critical inflection point for European AI builders: they’ll now face simultaneous compliance obligations on both sides of the Atlantic, with overlapping but distinct regulatory frameworks.

On March 20, 2026—just one day after RAISE takes effect—the Trump Administration released its National Policy Framework for Artificial Intelligence as a legislative recommendation to Congress. This timing underscores the urgency of federal AI governance in the US, but creates immediate complexity for companies operating across transatlantic markets.

What RAISE Requires

The RAISE Act imposes four core obligations on developers of “frontier” AI models:

  • Transparency requirements: Disclosure of model capabilities, limitations, and intended uses
  • Compliance protocols: Documentation of safety testing and risk mitigation measures
  • Safety reporting: Incident and vulnerability disclosure mechanisms
  • Registry and monitoring: Enrollment in federal tracking systems

These requirements apply specifically to “frontier” models—broadly defined as systems representing significant capability leaps—giving regulators considerable discretion in determining scope.

The Dual-Compliance Crunch for Irish and European Companies

For AI developers based in Ireland or operating under the EU AI Act framework, this creates a genuine coordination challenge:

Timeline collision: The EU AI Act’s core obligations take effect August 2, 2026, roughly 4.5 months after RAISE. Watermarking requirements arrived even earlier (November 2, 2025) and are already in force. Companies must prepare for both regimes simultaneously.

Overlapping but divergent requirements: Both frameworks demand transparency and safety reporting, but define “frontier” and high-risk systems differently. EU high-risk classification focuses on application domain (hiring, criminal justice, education), while US frontier classification emphasizes raw capability scale.

Documentation burden: Companies will need to maintain separate compliance dossiers. A model meeting EU transparency requirements may still fall short of RAISE’s frontier model standards, requiring supplementary documentation.
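The divergence described above can be sketched in a few lines: the EU test keys on application domain, while the US test keys on capability scale, so one model can land in different buckets under each regime. The domain list and the compute threshold below are invented placeholders, not figures from either framework.

```python
# Hypothetical illustration of the EU-vs-US classification divergence.
# The domain set and the FLOP threshold are placeholders for this
# sketch only; neither framework is quoted here.
EU_HIGH_RISK_DOMAINS = {"hiring", "criminal_justice", "education"}
US_FRONTIER_FLOP_THRESHOLD = 1e26  # invented capability proxy

def eu_high_risk(application_domain: str) -> bool:
    # EU classification: driven by where the system is deployed.
    return application_domain in EU_HIGH_RISK_DOMAINS

def us_frontier(training_flops: float) -> bool:
    # US classification: driven by raw capability scale.
    return training_flops >= US_FRONTIER_FLOP_THRESHOLD
```

Under this toy model, a small hiring tool is EU high-risk but not a US frontier model, while a very large general-purpose system used for search is the reverse, which is exactly why a single compliance dossier cannot satisfy both tests.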

What This Means for Irish Tech

Ireland’s distributed enforcement model—with the new AI Office coordinating across 15 sectoral regulators—will need to account for US federal oversight. The Central Bank of Ireland, overseeing financial AI, may now coordinate with US Treasury and banking regulators on frontier model deployment.

For developers, the practical implication is immediate: frontier model capability roadmaps should now factor in dual-compliance timelines. Companies planning to deploy generative AI systems in 2026 can’t treat EU and US frameworks as sequential; they’re effectively concurrent.
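The concurrency of the two timelines can be checked with simple date arithmetic. The milestone dates below are the ones quoted in this article; the dictionary itself is just a convenient sketch.

```python
from datetime import date

# Milestones taken from the dates quoted in this article.
milestones = {
    "EU watermarking requirements": date(2025, 11, 2),
    "US RAISE Act enforcement": date(2026, 3, 19),
    "EU AI Act core obligations": date(2026, 8, 2),
}

raise_date = milestones["US RAISE Act enforcement"]
eu_core = milestones["EU AI Act core obligations"]

# Gap between the two core compliance dates.
gap_days = (eu_core - raise_date).days
print(f"RAISE to EU core obligations: {gap_days} days")  # 136 days, about 4.5 months
```

A 136-day window between the two enforcement dates leaves no room to treat the regimes sequentially, which is the point the paragraph above makes.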

Open Questions

  • How will regulators define “frontier” in practice, and will US and EU definitions converge or diverge further?
  • Will companies need to maintain separate model versions for US vs. EU markets, or can unified compliance frameworks satisfy both?
  • How aggressively will the Trump Administration pursue enforcement under RAISE, given its legislative preference for lighter-touch frameworks?
  • Will Ireland’s AI Office establish formal coordination mechanisms with US regulators to reduce compliance friction?

The next 90 days will be critical for companies planning 2026 model deployments.


Source: Trump Administration National Policy Framework for Artificial Intelligence