Critical Vulnerabilities Discovered

Three security vulnerabilities, one rated critical, have been disclosed in LangChain and LangGraph, the world's most widely deployed AI framework family with over 847 million total downloads. The flaws, with CVSS scores ranging from 7.3 to 9.3, could allow attackers to access sensitive filesystem data, steal API keys and environment secrets, and manipulate conversation histories.

The vulnerabilities include CVE-2026-34070 (CVSS 7.5), a path traversal flaw enabling arbitrary file access; CVE-2025-68664 (CVSS 9.3), a critical deserialization vulnerability leaking environment secrets; and CVE-2025-67644 (CVSS 7.3), an SQL injection flaw in LangGraph’s SQLite checkpoint system.
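The SQL injection class behind CVE-2025-67644 can be illustrated generically. The sketch below is not LangGraph's actual checkpoint code; it uses a hypothetical checkpoint table to show how string-built SQL differs from the parameterized form that prevents injection:

```python
import sqlite3

# Illustrative only: a hypothetical checkpoint table, not LangGraph's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'secret-state')")

def load_unsafe(thread_id: str):
    # Vulnerable pattern: attacker-controlled input interpolated into SQL.
    # thread_id = "x' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(
        f"SELECT state FROM checkpoints WHERE thread_id = '{thread_id}'"
    ).fetchall()

def load_safe(thread_id: str):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()

print(load_unsafe("x' OR '1'='1"))  # leaks the stored row
print(load_safe("x' OR '1'='1"))    # returns nothing
```

The fix for this vulnerability class is mechanical: every value that can originate outside the process goes through a placeholder, never through string formatting.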

Massive Scale of Exposure

The timing couldn't be worse given LangChain's explosive adoption. Recent statistics show over 52 million LangChain downloads, 23 million LangChain-Core downloads, and 9 million LangGraph downloads in a single week. At that scale, hundreds of thousands of enterprise AI applications could be exposed.

The three flaws compound the problem for enterprise security teams: each exposes a different class of sensitive data that AI applications routinely handle in production environments.

Immediate Action Required

Fixes have been released for all three flaws: the path traversal issue is resolved in LangChain core version 1.2.22 and above, the deserialization flaw is patched in versions 0.3.81 and 1.2.5, and the SQL injection flaw is fixed in LangGraph checkpoint SQLite version 3.0.1.
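Given the patched versions above, one way to audit an environment is to compare installed package versions against those minimums. A minimal sketch using only the standard library; the naive version parsing handles plain numeric versions only (no pre-release tags), and it treats 1.2.22 as the floor for LangChain core, so a 0.3.x deployment patched at 0.3.81 would need a separate check:

```python
from importlib import metadata

# Patched minimums from the advisories above (1.2.22 for the langchain-core
# 1.x line; 3.0.1 for the LangGraph SQLite checkpoint package).
PATCHED = {
    "langchain-core": "1.2.22",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def ver_tuple(v: str) -> tuple:
    # Naive parse: "1.2.22" -> (1, 2, 22); ignores pre-release suffixes.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def audit(installed: dict) -> list:
    # Return the packages still below the patched minimum.
    return [
        name for name, minimum in PATCHED.items()
        if name in installed and ver_tuple(installed[name]) < ver_tuple(minimum)
    ]

def installed_versions() -> dict:
    # Look up what this environment actually has installed.
    out = {}
    for name in PATCHED:
        try:
            out[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            pass
    return out

print(audit({"langchain-core": "1.2.5",
             "langgraph-checkpoint-sqlite": "3.0.1"}))  # -> ['langchain-core']
```

In practice, `audit(installed_versions())` run on each host gives a quick list of deployments still awaiting the upgrade.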

Security experts are warning that threat actors typically move quickly to exploit AI infrastructure vulnerabilities, with exploitation attempts often beginning within hours of public disclosure. Organizations should expect immediate scanning and probing of their LangChain deployments.

This disclosure comes amid concerning trends in AI security, including at least 35 new CVE entries in March 2026 directly attributed to AI-generated code, up from just six in January. Security researchers estimate the real number could be five to ten times higher across the open-source ecosystem.

Open Questions

While patches are available, it remains unclear how many organizations have implemented proper inventory management for their AI framework dependencies. The rapid adoption of LangChain in production environments may have outpaced security practices, leaving many deployments vulnerable until systematic updates can be completed.
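A starting point for the inventory problem is simply enumerating which AI-framework distributions an environment contains. A minimal sketch using `importlib.metadata`; the package-name prefixes are assumptions and should be adjusted to your dependency set:

```python
from importlib import metadata

def ai_framework_inventory(prefixes=("langchain", "langgraph")):
    # Enumerate installed distributions whose names match the given prefixes.
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name.startswith(prefixes):
            found[name] = dist.version
    return found

# Emits e.g. {"langchain-core": "1.2.22", ...} on a host running LangChain.
print(ai_framework_inventory())
```

Running this across a fleet turns "we think we use LangChain somewhere" into a concrete list of packages and versions to check against the advisories.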


Source: Multiple Security Advisories