CVE-2026-34753
Chainguard vulnerability analysis and mitigation

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.16.0 up to but not including 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, with no URL validation or domain restrictions applied. This can be used to reach internal services (e.g., cloud metadata endpoints or internal HTTP APIs) accessible from the vLLM host. The vulnerability is fixed in 0.19.0.
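The typical mitigation for this class of SSRF is to validate user-supplied URLs before fetching them: restrict the scheme to HTTP/HTTPS and reject hosts that resolve to private, loopback, or link-local addresses (which covers cloud metadata endpoints such as 169.254.169.254). The sketch below is a hypothetical illustration of that pattern, not the actual patch shipped in vLLM 0.19.0; the helper name is invented.

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_url_allowed(url: str) -> bool:
    """Hypothetical SSRF guard: allow only HTTP/HTTPS URLs whose host
    resolves exclusively to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # reject file://, gopher://, etc.
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve the hostname (IP literals pass through unchanged).
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Block loopback (127.0.0.1), RFC 1918 ranges, link-local
        # (169.254.0.0/16, i.e. cloud metadata), and reserved space.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that resolve-then-check validation of this kind is still subject to DNS rebinding unless the fetch reuses the vetted IP, so production guards typically pin the resolved address when making the request.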


Source: NVD

Related Chainguard vulnerabilities:

| CVE ID | Severity | Score | Technologies | Component name | CISA KEV exploit | Has fix | Published date |
|---|---|---|---|---|---|---|---|
| CVE-2026-41242 | CRITICAL | 9.4 | JavaScript | librechat | No | Yes | Apr 18, 2026 |
| CVE-2026-40260 | MEDIUM | 6.9 | Python | litellm | No | Yes | Apr 17, 2026 |
| CVE-2026-40293 | MEDIUM | 6.5 | Wolfi | openfga | No | Yes | Apr 17, 2026 |
| CVE-2026-40347 | MEDIUM | 5.3 | Python | vllm-openai-cuda-12.9 | No | Yes | Apr 18, 2026 |
| CVE-2026-6491 | MEDIUM | 4.8 | Wolfi | libvips | No | No | Apr 17, 2026 |
