The Vulnerability Detection Paradox

Anthropic’s Claude Mythos model has identified thousands of zero-day vulnerabilities across every major operating system and web browser through Project Glasswing, including a 27-year-old flaw in OpenBSD, a security-focused operating system trusted in firewalls and routers worldwide.

Yet this breakthrough raises a troubling question: if AI can find vulnerabilities faster than humans can patch them, are we actually becoming more secure?

The answer, emerging from recent security research, appears to be no.

The Core Problem: Discovery Outpacing Remediation

Project Glasswing, run in partnership with Amazon Web Services, Apple, Google, Microsoft, NVIDIA, and others, has deliberately restricted access to Mythos precisely because of this security paradox. The model’s capability to discover flaws far exceeds any organisation’s capacity to develop, test, and deploy patches.

This creates what security researchers call the “vulnerability discovery-to-patching gap”: a window in which organisations know about exploitable flaws but cannot remediate them quickly enough. Meanwhile, the same AI capabilities that benefit defenders could just as easily be weaponised by adversaries.

For European and Irish organisations operating under the EU AI Act’s emerging compliance frameworks, this raises urgent questions about how gated access models and disclosure responsibilities should interact with regulatory transparency requirements.

The Compounding Effect: AI-Generated Vulnerabilities

The problem deepens when combined with a parallel trend. Seven in 10 organisations report confirmed or suspected security vulnerabilities introduced by AI-generated code in production systems. Yet 92% of organisations with confirmed AI-introduced vulnerabilities say their detection tools still work; the catch is that detection happens after deployment, not in the development pipeline.

This suggests organisations are running detection races they’re already losing: finding problems too late to prevent impact.

Adversary Activity Is Accelerating

Meanwhile, the threat landscape is intensifying. CrowdStrike’s 2026 Global Threat Report documents an 89% year-over-year increase in AI-enabled adversary activity. The average time from initial compromise to lateral movement has dropped to 29 minutes, with documented cases of attackers achieving initial access, lateral movement, and data exfiltration within four minutes.

AI isn’t just helping defenders find vulnerabilities; it is also drastically accelerating attacker timelines.

Practical Implications for Irish and European Builders

For organisations building in Ireland and across the EU, several immediate actions are critical:

  • Shift left on vulnerability management: Move detection from post-deployment into the development pipeline, particularly for AI-generated code (a sketch of a pre-merge gate follows this list)
  • Assume breach velocity: Plan patch timelines assuming attackers operate on 29-minute lateralisation windows, not weeks
  • Audit AI-assisted development: Seven in 10 organisations report vulnerabilities from AI-generated code; assume yours does too
  • Prepare for disclosure pressure: As regulatory sandboxes mature under the EU AI Act, vulnerability disclosure expectations will tighten
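
To make the first point concrete, here is a minimal sketch in Python of what a shift-left gate can look like. It assumes a CI job that diffs a merge request against origin/main and fails the build when changed files match a handful of risky patterns; the file name shift_left_gate.py, the branch name, and the pattern list are illustrative assumptions rather than a reference implementation, and a production pipeline would call a proper scanner (Semgrep, CodeQL, or similar) instead of hand-rolled regexes.

  # shift_left_gate.py: illustrative pre-merge check, not a real SAST tool.
  # Assumed setup: a CI job runs this against the files changed in a merge request
  # and blocks the merge on a non-zero exit. The pattern list and the "origin/main"
  # branch name are placeholders for your own scanner and repository layout.
  import re
  import subprocess
  import sys
  from pathlib import Path

  # Hypothetical high-severity patterns, for illustration only.
  RISKY_PATTERNS = {
      "eval() on dynamic input": re.compile(r"\beval\s*\("),
      "possible hard-coded credential": re.compile(r"(password|secret|api_key)\s*=\s*['\"]"),
      "shell command built from strings": re.compile(r"os\.system\s*\(.*(%|\+|format\()"),
  }

  def changed_python_files() -> list[str]:
      # Diff against the target branch to look only at what this change introduces.
      out = subprocess.run(
          ["git", "diff", "--name-only", "origin/main...HEAD"],
          capture_output=True, text=True, check=True,
      ).stdout
      return [f for f in out.splitlines() if f.endswith(".py")]

  def main() -> int:
      findings = []
      for path in changed_python_files():
          try:
              lines = Path(path).read_text(encoding="utf-8").splitlines()
          except (OSError, UnicodeDecodeError):
              continue  # deleted, moved, or binary file
          for lineno, line in enumerate(lines, start=1):
              for label, pattern in RISKY_PATTERNS.items():
                  if pattern.search(line):
                      findings.append(f"{path}:{lineno}: {label}")
      for finding in findings:
          print(finding)
      # Non-zero exit fails the pipeline, so the flaw is caught before deployment.
      return 1 if findings else 0

  if __name__ == "__main__":
      sys.exit(main())

Wired into the same CI stage that already runs the test suite, a gate like this turns detection after deployment into detection before merge, which is the point of shifting left; replacing the placeholder regexes with a real scanner’s findings is the obvious next step.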

Open Questions

As this dual-use capability landscape evolves, critical uncertainties remain:

  • How should gated access to powerful AI security tools be balanced against competitive fairness?
  • What disclosure timelines should regulators mandate for AI-discovered vulnerabilities?
  • Can patching infrastructure ever match AI discovery velocity?
  • Should the EU AI Act’s transparency requirements extend to AI-discovered security flaws?

The field has reached an inflection point where AI’s defensive benefits are inseparable from its offensive risks. Managing that balance will define AI security maturity in 2026.


Source: CrowdStrike 2026 Global Threat Report & Anthropic Project Glasswing