CVE-2025-62426
vLLM vulnerability analysis and mitigation

Overview

CVE-2025-62426 affects vLLM, an inference and serving engine for large language models (LLMs), in versions from 0.5.5 up to (but not including) 0.11.1. The vulnerability exists in the /v1/chat/completions and /tokenize endpoints, where the chat_template_kwargs request parameter is processed without proper validation against the chat template. The issue was disclosed on November 20, 2025, and has been patched in version 0.11.1 (GitHub Advisory).

Technical details

The vulnerability stems from the serving_engine.py component, where chat_template_kwargs are unpacked into kwargs and passed to apply_hf_chat_template in chat_utils.py without validation of keys or values. This allows a client to override optional parameters such as tokenize, changing its value from the safe default of False to True. The vulnerability has been assigned a CVSS v3.1 base score of 6.5 (Medium) with vector string CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H, indicating network attack vector, low attack complexity, and high impact on availability (GitHub Advisory).
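The vulnerable pattern can be sketched in simplified form. This is not the actual vLLM code; the function names below mirror the description only to illustrate how unpacking unvalidated, client-supplied kwargs lets a request override a server-side default:

```python
def apply_chat_template(messages, tokenize=False, **kwargs):
    # Stand-in for apply_hf_chat_template: tokenize=True would trigger
    # a blocking tokenization step; the safe default is tokenize=False.
    return {"messages": messages, "tokenized": tokenize}

def handle_request_vulnerable(body):
    # Vulnerable pattern: user-controlled kwargs are unpacked without
    # checking keys, so "tokenize" can be overridden by the client.
    chat_template_kwargs = body.get("chat_template_kwargs", {})
    return apply_chat_template(body["messages"], **chat_template_kwargs)

# A malicious request flips tokenize to True despite the default:
malicious = {
    "messages": [{"role": "user", "content": "hi"}],
    "chat_template_kwargs": {"tokenize": True},
}
result = handle_request_vulnerable(malicious)
# result["tokenized"] is now True
```

Because `**chat_template_kwargs` is expanded directly into the call, any keyword parameter of the template function becomes attacker-controllable.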

Impact

The vulnerability can lead to a denial of service condition where any authenticated user can cause the vLLM server to block processing for extended periods through Chat Completion or Tokenize requests. Since tokenization is a blocking operation, sufficiently large input can block the API server's event loop, preventing the handling of all other requests until the tokenization completes (GitHub Advisory).
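The event-loop blocking effect described above can be demonstrated generically with asyncio. This is an illustration of the failure mode, not vLLM code: a synchronous, CPU-bound call executed on the event-loop thread stalls every other pending request until it returns:

```python
import asyncio
import time

async def blocking_handler():
    # Simulates a synchronous tokenization call running directly on
    # the event-loop thread (the vulnerable behaviour).
    time.sleep(0.2)  # stand-in for tokenizing a very large input
    return "done"

async def other_request():
    # A trivial request that needs essentially no CPU time.
    return "served"

async def main():
    start = time.monotonic()
    # The blocking handler stalls the loop, so other_request cannot
    # complete until the sleep finishes.
    results = await asyncio.gather(blocking_handler(), other_request())
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# elapsed is at least 0.2 s: the concurrent request waited on the block
```

This is why a single sufficiently large tokenization request can deny service to all other clients of the API server.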

Mitigation and workarounds

The vulnerability has been patched in vLLM version 0.11.1. The fix enforces tokenize=False when applying chat templates and rejects the tokenize and chat_template keys within chat_template_kwargs. Users should upgrade to version 0.11.1 or later to address this vulnerability (GitHub Advisory).
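The patched behaviour can be sketched as follows. The helper names here are hypothetical, not vLLM's actual API; the point is that disallowed keys are rejected and tokenize is pinned server-side:

```python
# Keys a client must not be able to set via chat_template_kwargs.
DISALLOWED_KEYS = {"tokenize", "chat_template"}

def sanitize_chat_template_kwargs(kwargs):
    # Reject any attempt to override protected parameters.
    bad = DISALLOWED_KEYS & kwargs.keys()
    if bad:
        raise ValueError(f"disallowed chat_template_kwargs: {sorted(bad)}")
    return kwargs

def apply_chat_template_safe(messages, chat_template_kwargs):
    kwargs = sanitize_chat_template_kwargs(dict(chat_template_kwargs))
    # tokenize is enforced as False and can no longer be overridden.
    return {"messages": messages, "tokenized": False, **kwargs}
```

With this check in place, a request carrying `{"tokenize": true}` is rejected outright instead of being unpacked into the template call.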


Related vLLM vulnerabilities:

| CVE ID         | Severity | Score | Technology | Component            | CISA KEV exploit | Has fix | Published    |
|----------------|----------|-------|------------|----------------------|------------------|---------|--------------|
| CVE-2025-62164 | High     | 8.8   | vLLM       | vllm                 | No               | Yes     | Nov 21, 2025 |
| CVE-2025-62372 | High     | 8.3   | vLLM       | vllm                 | No               | Yes     | Nov 21, 2025 |
| CVE-2025-6242  | High     | 7.1   | Chainguard | py3-vllm-cuda-12.4   | No               | Yes     | Oct 07, 2025 |
| CVE-2025-62426 | Medium   | 6.5   | vLLM       | vllm                 | No               | Yes     | Nov 21, 2025 |
| CVE-2025-61620 | N/A      | N/A   | Chainguard | py3-vllm-cuda-12.4   | No               | Yes     | Oct 08, 2025 |
