vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) has become a predictable constant value, which makes it more feasible for someone to attempt to exploit hash collisions. The impact of a collision would be the reuse of a cache entry generated from different content. Given knowledge of the prompts in use and this predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
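To illustrate the mechanism, here is a minimal sketch assuming a simplified prefix-cache key layout. The function names (builtin_block_hash, hardened_block_hash) and the key structure are hypothetical, not vLLM's actual internals. It shows why a cache key built with Python's built-in hash() becomes predictable on Python 3.12, where hash(None) is a fixed constant, and how a content-based digest such as SHA-256 avoids that property:

```python
import hashlib
import pickle
import sys

def builtin_block_hash(parent_hash, token_ids):
    # Prefix-cache-style key built with Python's built-in hash().
    # On Python 3.12+, hash(None) is a fixed constant, so the key for
    # the first block in a sequence (parent_hash=None) is predictable,
    # making deliberate collisions easier to search for.
    return hash((parent_hash, tuple(token_ids)))

def hardened_block_hash(parent_hash, token_ids):
    # Content-based digest over the same inputs. This is one generic
    # mitigation, not necessarily the exact change shipped in 0.7.2.
    payload = pickle.dumps((parent_hash, tuple(token_ids)))
    return hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    print(sys.version)
    print("hash(None): ", hash(None))  # fixed constant on 3.12+
    print("builtin key:", builtin_block_hash(None, [101, 2023, 7]))
    print("sha256 key: ", hardened_block_hash(None, [101, 2023, 7]))
```

Running this twice under Python 3.12 prints the same first-block key in both processes; on earlier versions, hash(None) was derived from the object's memory address and was not a documented constant. Note that CPython's hash() of ints and tuples of ints is deterministic regardless of PYTHONHASHSEED, so the 3.12 change to hash(None) removed the one unpredictable ingredient in a key of this shape.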
History

Wed, 12 Feb 2025 21:15:00 +0000

Type: Metrics
Values Added: ssvc {'options': {'Automatable': 'no', 'Exploitation': 'none', 'Technical Impact': 'partial'}, 'version': '2.0.3'}


Mon, 10 Feb 2025 13:30:00 +0000


Fri, 07 Feb 2025 20:15:00 +0000

Type: Description
Values Added: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) has become a predictable constant value, which makes it more feasible for someone to attempt to exploit hash collisions. The impact of a collision would be the reuse of a cache entry generated from different content. Given knowledge of the prompts in use and this predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.

Type: Title
Values Added: vLLM using built-in hash() from Python 3.12 leads to predictable hash collisions in vLLM prefix cache

Type: Weaknesses
Values Added: CWE-354

Type: References

Type: Metrics
Values Added: cvssV3_1 {'score': 2.6, 'vector': 'CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2025-02-12T20:51:46.402Z

Reserved: 2025-02-03T19:30:53.399Z

Link: CVE-2025-25183

Vulnrichment

Updated: 2025-02-12T20:48:59.400Z

NVD

Status: Received

Published: 2025-02-07T20:15:34.083

Modified: 2025-02-07T20:15:34.083

Link: CVE-2025-25183

Redhat

Severity: Low

Public Date: 2025-02-06T20:00:05Z

Links: CVE-2025-25183 - Bugzilla