CVE-2026-34756
Chainguard vulnerability analysis and mitigation

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.1.0 before 0.19.0, the vLLM OpenAI-compatible API server contains a Denial of Service vulnerability. Because the `n` parameter in the ChatCompletionRequest and CompletionRequest Pydantic models has no upper-bound validation, an unauthenticated attacker can send a single HTTP request with an astronomically large `n` value. Before the request even reaches the scheduling queue, the server allocates millions of request object copies on the heap, blocking the Python asyncio event loop and causing an immediate Out-Of-Memory crash. This vulnerability is fixed in 0.19.0.


Source: NVD

Related Chainguard vulnerabilities:

| CVE ID | Severity | Score | Technologies | Component name | CISA KEV exploit | Has fix | Published date |
|---|---|---|---|---|---|---|---|
| CVE-2026-41242 | CRITICAL | 9.4 | JavaScript | librechat | No | Yes | Apr 18, 2026 |
| CVE-2026-40260 | MEDIUM | 6.9 | Python | litellm | No | Yes | Apr 17, 2026 |
| CVE-2026-40293 | MEDIUM | 6.5 | Wolfi | openfga | No | Yes | Apr 17, 2026 |
| CVE-2026-40347 | MEDIUM | 5.3 | Python | vllm-openai-cuda-12.9 | No | Yes | Apr 18, 2026 |
| CVE-2026-6491 | MEDIUM | 4.8 | Wolfi | libvips | No | No | Apr 17, 2026 |
