CVE-2026-34070
LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
Description
LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
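To illustrate the class of flaw described above, here is a minimal sketch of why a file-extension-only check does not stop traversal or absolute-path injection. The function and paths are illustrative, not LangChain's actual code; the point is that such a check inspects where a path *ends*, never where it *points*.

```python
import os

def extension_check_only(path: str, allowed_ext: str = ".txt") -> bool:
    """Mimic a file-extension-only gate, as described in the advisory:
    it accepts any path ending in the allowed extension, regardless of
    which directory the path resolves into."""
    return os.path.splitext(path)[1] == allowed_ext

# A well-behaved path passes, as expected:
assert extension_check_only("prompts/greeting.txt") is True
# But a traversal payload ending in the allowed extension also passes,
# even though it escapes the intended prompt directory:
assert extension_check_only("../../../../home/user/secrets.txt") is True
# Only a mismatched extension is rejected:
assert extension_check_only("../../etc/passwd") is False
```

This is why the advisory notes the attacker is "constrained only by file-extension checks": the gate limits *which* files can be read, not *where* they live.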
INFO
Published Date :
March 31, 2026, 3:15 a.m.
Last Modified :
April 2, 2026, 5:04 p.m.
Remotely Exploitable :
Yes
Source :
[email protected]
Affected Products
The following products are affected by the CVE-2026-34070 vulnerability. Even if cvefeed.io is aware of the exact product versions that are affected, this information is not represented in the table below.
CVSS Scores
| Score | Version | Severity | Vector | Exploitability Score | Impact Score | Source |
|---|---|---|---|---|---|---|
| 7.5 | CVSS 3.1 | HIGH | AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | 3.9 | 3.6 | [email protected] |
Solution
- Update LangChain to version 1.2.22 or later.
- Validate file paths in prompt configurations.
- Sanitize user-supplied input for prompt loading.
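As a sketch of the second and third mitigations, a caller can resolve any user-influenced path and confirm it stays inside an allowed prompt directory before handing it to a loader such as `load_prompt()`. The helper name `safe_prompt_path` and the directory layout are illustrative assumptions, not part of LangChain's API:

```python
from pathlib import Path

def safe_prompt_path(base_dir: str, user_path: str) -> Path:
    """Resolve a user-influenced prompt path and reject directory
    traversal or absolute-path injection before the file is read."""
    base = Path(base_dir).resolve()
    # Joining an absolute user_path replaces `base` entirely, so the
    # containment check below also catches absolute path injection.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes prompt directory: {user_path!r}")
    return candidate

# Hypothetical usage: validate before calling load_prompt()
# prompt = load_prompt(safe_prompt_path("/app/prompts", cfg["template_path"]))
```

Resolving with `Path.resolve()` before the containment check is the important step: it collapses `..` segments and symlinks, so the comparison is made against the real target rather than the raw string.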
Public PoC/Exploit Available on GitHub
CVE-2026-34070 has 4 public PoCs/exploits available on GitHub. Go to the Public Exploits tab to see the list.
References to Advisories, Solutions, and Tools
Here, you will find a curated list of external links that provide in-depth
information, practical solutions, and valuable tools related to
CVE-2026-34070.
| URL | Resource |
|---|---|
| https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c | Patch |
| https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22 | Release Notes |
| https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54 | Exploit, Vendor Advisory |
CWE - Common Weakness Enumeration
While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to them. CVE-2026-34070 is associated with the following CWE:
- CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Common Attack Pattern Enumeration and Classification (CAPEC)
CAPEC stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2026-34070 weaknesses.
We scan GitHub repositories to detect new proof-of-concept exploits. The following list collects public exploits and proofs-of-concept published on GitHub (sorted by most recently updated).
- A curated timeline of real AI agent security incidents, breaches, and vulnerabilities (2024-2026). Every entry sourced and dated.
- 📡 PoC auto-collected from GitHub. ⚠️ Be careful: may contain malware.
Results are limited to the first 15 repositories due to potential performance issues.
The following list contains news articles that mention the CVE-2026-34070 vulnerability anywhere in the article.
The following table lists the changes that have been made to the
CVE-2026-34070 vulnerability over time.
Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.
Initial Analysis by [email protected]
Apr. 02, 2026
- Added CPE Configuration: OR *cpe:2.3:a:langchain:langchain:*:*:*:*:*:*:*:* versions up to (excluding) 1.2.22
- Added Reference Type (GitHub, Inc.): https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c — Types: Patch
- Added Reference Type (GitHub, Inc.): https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22 — Types: Release Notes
- Added Reference Type (CISA-ADP): https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54 — Types: Exploit, Vendor Advisory
- Added Reference Type (GitHub, Inc.): https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54 — Types: Exploit, Vendor Advisory
CVE Modified by 134c704f-9b21-4f2e-91b3-4a467353bcc0
Mar. 31, 2026
- Added Reference: https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54
New CVE Received by [email protected]
Mar. 31, 2026
- Added Description: LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
- Added CVSS V3.1: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
- Added CWE: CWE-22
- Added Reference: https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c
- Added Reference: https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22
- Added Reference: https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54