Critical AI Security Vulnerabilities Surge as Ireland Prepares EU AI Office
Multiple high-severity AI vulnerabilities discovered in Chrome, CrewAI, and GitHub Copilot as Ireland establishes National AI Office
Critical AI Security Flaws Emerge
Security researchers have uncovered a troubling series of high-severity vulnerabilities in popular AI platforms, highlighting growing risks as Ireland prepares to establish its National AI Office by August 2026.
Unit 42 discovered CVE-2026-0628, a critical flaw in Chrome’s Gemini Live panel that allows malicious extensions to hijack the AI assistant and access cameras and microphones. Meanwhile, OpenClaw’s WebSocket gateway was found to contain a CVSS 9.9 privilege-escalation vulnerability that lets attackers gain admin access and achieve remote code execution.
Perhaps most concerning, four CVEs in CrewAI allow attackers to chain prompt injection into remote code execution, server-side request forgery, and unauthorised file access, and they affect default configurations used by thousands of developers.
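The article does not detail the CrewAI flaws, but the general defence against chaining prompt injection into code execution is to treat every model-proposed action as untrusted input and validate it before execution. A minimal sketch, with an invented allow-list guard; none of these names come from CrewAI's actual API:

```python
import re
import shlex

# Hypothetical allow-list for commands an agent may run on your behalf.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

# Patterns that let a single injected "argument" escalate into chained
# execution or file access outside the working directory.
FORBIDDEN_PATTERNS = [
    re.compile(r"[;&|`$]"),   # shell metacharacters enable command chaining
    re.compile(r"\.\./"),     # path traversal
]

def is_safe_tool_call(command: str) -> bool:
    """Reject model-proposed commands that could turn prompt injection
    into remote code execution or unauthorised file access."""
    for pattern in FORBIDDEN_PATTERNS:
        if pattern.search(command):
            return False
    try:
        argv = shlex.split(command)   # raises ValueError on unbalanced quotes
    except ValueError:
        return False
    return bool(argv) and argv[0] in ALLOWED_COMMANDS
```

An allow-list is deliberately conservative: anything the guard does not recognise is refused, which is the right default when the "user" supplying the command may be an injected prompt.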
Supply Chain Attacks Target AI Infrastructure
A coordinated supply chain campaign from March 19-31 compromised multiple open-source projects, including the widely used Axios npm package and the LiteLLM AI proxy. Attackers poisoned CI/CD pipelines and published malicious package versions containing remote access trojans, potentially affecting millions of developer environments.
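Pinning and verifying dependency digests is the standard mitigation against poisoned registry releases of the kind described above. A minimal sketch of lockfile-style integrity checking; the workflow is generic and not taken from Axios or LiteLLM tooling:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used to compare a downloaded artifact to a pinned value."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Reject a dependency tarball whose digest does not match the one
    recorded at pin time. A maliciously republished version will fail this
    check as long as the pinned digest came from a known-good build."""
    return sha256_of(data) == pinned_digest
```

This is what lockfiles (`package-lock.json`, `poetry.lock`, and similar) do automatically, which is why committing them and installing from them, rather than resolving "latest" in CI, narrows the window a poisoned release can exploit.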
Stanford’s HAI AI Index Report reveals publicly reported AI security incidents increased 56.4% from 2023 to 2024, with acceleration continuing into 2026.
Irish and European Context
The timing is particularly significant as Ireland published the General Scheme of the Regulation of Artificial Intelligence Bill 2026 in February, implementing the EU AI Act at national level. The forthcoming National AI Office will manage a regulatory sandbox while ensuring compliance with EU standards.
Experts warn Ireland faces heightened cyber threats and AI-driven attacks as it prepares for its 2026 EU presidency, including potential attacks on service providers and disinformation campaigns.
Practical Implications
For AI builders and users, these developments underscore critical security gaps. One survey found 83% of organisations plan to deploy agentic AI capabilities, yet only 29% report being ready to operate them securely.
Developers should immediately audit AI integrations, implement robust input validation, and establish monitoring for unusual AI behaviour. Organisations must balance innovation with security as regulatory frameworks crystallise.
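As one concrete form of the monitoring described above, outbound URLs in model output can be checked against a host allow-list: a link to an unexpected host is a cheap signal of injection-driven exfiltration or SSRF-style misuse worth logging. The allow-list and host names below are hypothetical:

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of hosts the assistant is expected to reference.
ALLOWED_HOSTS = {"api.example.com"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def audit_model_output(text: str) -> list[str]:
    """Return URLs in model output that point outside the allow-list,
    so they can be logged or blocked before being followed."""
    suspicious = []
    for url in URL_RE.findall(text):
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_HOSTS:
            suspicious.append(url)
    return suspicious
```

A check like this does not replace input validation, but it gives operators the "unusual AI behaviour" signal the recommendations above call for, at the cost of a few lines in the response path.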
Open Questions
How will Ireland’s National AI Office coordinate with existing cybersecurity frameworks? Can regulatory sandboxes adequately test security while fostering innovation? The intersection of rapid AI deployment and mounting security threats demands urgent attention from both builders and regulators.